connect converge spring 2016



Page 1: Connect Converge Spring 2016

Delivering on the IoT Customer Experience

Why Big Data Makes Big Sense For Every Size Business

Congratulations to this Year's Future Leaders in Technology Recipients!

Hewlett Packard Enterprise Technology User Community

XYGATE Data Protection: All the benefits of HPE SecureData

Learn more at xypro.com/XDP

© 2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

Now available from

Simplicity: Secure Stateless Tokenization

Format-Preserving Encryption

Page-Integrated Encryption

Standards-Based AES

Stateless Key Management

No application changes

Protect your data at rest and in transit

Features

Cover Story
25 | Delivering On the IoT Customer Experience

Focus On
31 | Quick Guide to Increasing Profits with Big Data Technology

Technology
11 | Democratizing Big Data Value
15 | How to Survive The Zombie Apocalypse
17 | Local, remote and centrally unified key management
33 | Security Concerns: ClearPass Has You Covered
35 | Cyber Crime Report Has Important Insights For NonStop Users
45 | Integrating Data Protection Into Legacy Systems
49 | 3 Reasons to Modernize Your SAP Environment

Community
03 | Advocacy: The HPE Helion Private Cloud and Cloud Broker Services
07 | Let Me Help You With Hyper-Converged
09 | Top Thinking: Composable Infrastructure, Breakthrough to Fast, Fluid IT
23 | Reinvent Your Business Printing With HP
37 | Passing The Torch
41 | What in the World of Open Source
52 | Future Leaders In Technology

Connect Converge Staff (click names to send us an email)

Chief Executive Officer: Kristi Elizondo

Editor-in-Chief: Stacie Neall

Event Marketing Manager: Kelly Luna

Art Director: John Clark

Advertising Sales and Partner Relations

Click here to view Connect Board of Directors


Editor's Letter

Welcome to the Spring issue of Connect Converge.

With so much buzz around the Internet of Things, I felt inclined to find out who first coined the phrase. My search led me to Kevin Ashton. He says he could be wrong, but feels fairly certain that he said it first while working for Procter & Gamble in the late 1990s. I am guessing he had no idea the IoT explosion would be this enormous. While my fitness tracker may be a failed New Year's resolution, for many businesses and consumers IoT is well on its way to transforming the way we work, live and play. From smart cities to connected homes, every aspect of our lives will be touched.

It has been estimated that 24 billion IoT devices will be installed globally by 2020, and a whopping $6 trillion will be invested in solutions that support IoT over the next five years. In this issue (page 25), learn more about how the HPE Universal IoT Platform enables data monetization, addresses challenges and delivers the best outcomes for IoT success.

We can't wait to see you at HPE Discover this year. Use the Connect code and save $300.

And as always, if you have a technical how-to or an inspiring customer success story, please share it with us.

Stay Connected,

Stacie J. Neall (sjneall), Managing Editor
sneall@connect-community.org

PJL support: Windows, SAP, Host, UNIX/Linux

Learn more at hollandhouse.com/unispool-printaurus

And in case you haven't heard, getting connected with HPE's user community is easy and free.


Dr. Bill Highleyman is the Managing Editor of The Availability Digest (www.availabilitydigest.com), a monthly online publication and a resource of information on high- and continuous-availability topics. His years of experience in the design and implementation of mission-critical systems have made him a popular seminar speaker and a sought-after technical writer. Dr. Highleyman is a past chairman of ITUG, the former HP NonStop User's Group, the holder of numerous US patents, the author of Performance Analysis of Transaction Processing Systems, and the co-author of the three-volume series Breaking the Availability Barrier.

The HPE Helion Private Cloud and Cloud Broker Services
Dr. Bill Highleyman

Managing Editor

Availability Digest

ADVOCACY

First, a reminder: Don't forget the HP-UX Boot Camp, which will be held in Chicago from April 24th through April 26th. Check out the Connect website for details.

HPE Helion

HPE Helion is a complete portfolio of cloud products and services that offers enterprise security, scalability and performance. Helion enables customers to deploy open and secure hybrid cloud solutions that integrate private cloud services, public cloud services and existing IT assets, allowing IT departments to respond to fast-changing market conditions and to get applications to market faster. HPE Helion is based on the open-source OpenStack cloud technology.

The Helion portfolio includes the Helion CloudSystem, which is a private cloud; the Helion Development Program, which offers IT developers a platform to build, deploy and manage cloud applications quickly and easily; and the Helion Managed Cloud Broker, which helps customers to deploy hybrid clouds in which applications span private and public clouds.

In its initial release, HPE intended to create a public cloud with Helion. However, it has since decided not to compete with Amazon AWS and Microsoft Azure in the public-cloud space. It has withdrawn support for a public Helion cloud as of January 31, 2016.

Figure: How a Hybrid Cloud Delivery Model Transforms IT (from "Become a cloud service broker," HPE white paper)

The Announcement of HP Helion

HP announced Helion in May 2014 as a portfolio of cloud products and services that would enable organizations to build, manage and run applications in hybrid IT environments. Helion is based on the open-source OpenStack cloud. HP was quite familiar with OpenStack cloud services; it had been running OpenStack in enterprise environments for over three years. HP was a founding member of the OpenStack Foundation and a leader in the OpenStack and Cloud Foundry communities.

HP's announcement of Helion included several initiatives:

• It planned to provide OpenStack public cloud services in twenty of its existing eighty data centers worldwide.

• It offered a free version of the HP Helion OpenStack Community edition, supported by HP, for use by organizations for proofs of concept, pilots and basic production workloads.

• The HP Helion Development Program, based on Cloud Foundry, offered IT developers an open platform to build, deploy and manage OpenStack cloud applications quickly and easily.

• HP Helion OpenStack Professional Services assisted customers with cloud planning, implementation and operation.

These new HP Helion cloud products and services joined the company's existing portfolio of hybrid cloud computing offerings, including the HP Helion CloudSystem, a private cloud solution.

What Is HPE Helion?

HPE Helion is a collection of products and services that comprises HPE's cloud services:

• Helion is based on OpenStack, a large-scale open-source cloud project and community established to drive industry cloud standards. OpenStack is currently supported by over 150 companies. It allows service providers, enterprises and government agencies to build massively scalable public, private and hybrid clouds using freely available Apache-licensed software.

• The Helion Development Environment is based on Cloud Foundry, an open-source project that supports the full lifecycle of cloud development, from initial development through all testing stages to final deployment.

• The Helion CloudSystem (described in more detail later) is a cloud solution for a hybrid world. It is a fully integrated, end-to-end private cloud solution built for traditional and cloud-native workloads, and it delivers automation, orchestration and control across multiple clouds.

• Helion Cloud Solutions provide tested, custom cloud solutions for customers. The solutions have been validated by HPE cloud experts and are based on OpenStack running on HP ProLiant servers.

OpenStack – The Open Cloud

OpenStack has three major components:

• OpenStack Compute – provisions and manages large networks of virtual machines.

• OpenStack Storage – creates massive, secure and reliable storage using standard hardware.

• OpenStack Image – catalogs and manages libraries of server images stored on OpenStack Storage.

OpenStack Compute

OpenStack Compute provides all of the facilities necessary to support the life cycle of instances in the OpenStack cloud. It creates a redundant and scalable computing platform comprising large networks of virtual machines. It provides the software, control panels and APIs necessary for orchestrating a cloud, including running instances, managing networks and controlling access to the cloud.
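To make that orchestration role concrete, here is a minimal sketch of provisioning a virtual machine through OpenStack Compute using the open-source openstacksdk Python library; the cloud, image, flavor and network names are placeholders for whatever your own OpenStack environment provides, not values specific to HPE Helion.

    import openstack

    # Credentials and endpoints are read from a clouds.yaml entry named "my-cloud".
    conn = openstack.connect(cloud="my-cloud")

    # Look up the building blocks of the instance (names are assumptions).
    image = conn.compute.find_image("ubuntu-server")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Ask OpenStack Compute to schedule the instance on an available hypervisor.
    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )

    # Block until the instance reaches ACTIVE (or raise on error).
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)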

OpenStack Storage

OpenStack Storage is modeled after Amazon's EBS (Elastic Block Store) mass store. It provides redundant, scalable data storage, using clusters of inexpensive commodity servers and hard drives to store massive amounts of data. It is not a file system or a database system; rather, it is intended for long-term storage of large amounts of data (blobs). Its use of a distributed architecture with no central point of control provides great scalability, redundancy and permanence.

[Figure: The OpenStack cloud. OpenStack Compute provisions and manages large networks of virtual machines (hypervisor hosts running VMs); OpenStack Storage creates petabytes of secure, reliable storage using commodity hardware; OpenStack Image catalogs and manages libraries of images (server images, web pages, backups, email) and stores snapshots of compute nodes as image snapshots.]

OpenStack Image Service

OpenStack Image Service is a retrieval system for virtual-machine images. It provides registration, discovery and delivery services for these images. It can use OpenStack Storage or Amazon S3 (Simple Storage Service) for storage of virtual-machine images and their associated metadata. It provides a standard RESTful web interface for querying information about stored virtual images.
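As a rough illustration of that query interface, the same openstacksdk library wraps the Image service's RESTful API; the cloud and image names below are placeholder assumptions.

    import openstack

    conn = openstack.connect(cloud="my-cloud")

    # Enumerate registered virtual-machine images and some of their metadata.
    for image in conn.image.images():
        print(image.name, image.status, image.disk_format, image.size)

    # Discovery: find a single image by name before booting a server from it.
    img = conn.image.find_image("ubuntu-server")
    if img is not None:
        print("found image", img.id)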

The Demise of the Helion Public Cloud

After announcing its public cloud, HP realized that it could not compete with the giants of the industry, Amazon AWS and Microsoft Azure, in the public-cloud space. Therefore, HP (now HPE) sunsetted its Helion public cloud program in January 2016.

However, HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications based on HPE Helion OpenStack and the HPE Helion Development Platform. It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure.

HPE has been practical in terminating its public cloud program: its purchase of Eucalyptus provides ease of integration with Amazon AWS. Investment in the development of the open-source OpenStack model is protected and remains a robust and solid approach for the building, testing and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore, HPE supports customers who want to run HPE's Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure.

The Helion Private Cloud – The HPE Helion CloudSystem

Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture"1 is an excellent guide to the Helion CloudSystem.

1 HP Helion Rack solution architecture, HP White Paper, 2015


Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. An early adopter of social media and active in communities, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog.

He started his "social persona" as HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as CalvinZito.

You can also contact him via email at calvin.zito@hp.com.

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage

The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement. Hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure

To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says

Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do and how it impacts the organization. ActualTech Media conducted a survey of more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says:

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure and software-defined storage. This requires registration.

• One more: this one focuses on use cases, including Virtual Desktop Infrastructure, Remote Office/Branch Office, and Public & Private Cloud. Again, this one requires registration.


What others are saying

Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing. VIDEO HERE

The City of Los Angeles has also adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions

The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is that you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016 | Fairmont San Jose Hotel | San Jose, CA


Chris Purcell has 28+ years of experience working with technology within the datacenter, currently focused on integrated systems (server, storage and networking, wrapped with a complete set of services).

You can find Chris on Twitter as Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog.

Composable Infrastructure: Breakthrough to Fast, Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of its infrastructure.

As I noted in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things that's needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage and fabric resources.

The many virtues of disaggregation

You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way: today, most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically, you have to choose small, medium or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh | 4/12/2016 Doha | 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
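As a purely hypothetical sketch of what provisioning through such a unified API could look like (the endpoint, token and field names below are invented for illustration and are not the actual HPE Synergy or OneView API; consult the product's API reference for the real resource model), a single authenticated REST call describes the compute, storage and fabric an application needs, and the composer assembles it from the shared pools:

    import requests

    COMPOSER = "https://composer.example.com/rest"      # hypothetical endpoint
    HEADERS = {"X-Auth-Token": "REPLACE_ME", "Content-Type": "application/json"}

    profile = {
        "name": "web-tier-01",
        "template": "small-web-server",                  # pre-defined server profile
        "storage": {"volumes": [{"sizeGiB": 200}]},
        "fabric": {"connections": ["prod-vlan-110"]},
    }

    # One call composes compute, storage and fabric from the shared resource pools.
    resp = requests.post(f"{COMPOSER}/server-profiles", json=profile,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print("provisioning started:", resp.json().get("uri"))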

An infrastructure for both IT worlds

This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy, the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven, cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.

Hewlett Packard Enterprise Technology User Group


Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations, those specifically designed to leverage Big Data architectures, could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service and hardened architectures that are almost turnkey, with back-end hardware, database support and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all

When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage and networking types of problems. Over the past five or six years, we've been spending our energy, strategy and time on the big areas around mobility, security and, of course, Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators (data scientists, developers, IT operations managers, chief information security officers and startup founders) who use technology to improve the way we live, work and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company or a large enterprise getting information from all your smaller remote sites: everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study

Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
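A toy sketch (not any vendor's product) of the data structure behind that scenario: Wi-Fi probe observations keyed by MAC address accumulate into a visit history that a clerk-facing app could retrieve on the next visit. The MAC address and zone names are made up for illustration.

    from collections import defaultdict
    from datetime import datetime
    from typing import Optional

    visits = defaultdict(list)  # MAC address -> list of observation records

    def record_ping(mac: str, zone: str, when: Optional[datetime] = None) -> None:
        """Store one observation of a device seen near an access point."""
        visits[mac].append({"zone": zone, "time": when or datetime.now()})

    def visit_history(mac: str) -> list:
        """What a clerk (or coupon engine) pulls up when the shopper returns."""
        return visits[mac]

    record_ping("a4:5e:60:00:00:01", "denim")      # illustrative MAC address
    record_ping("a4:5e:60:00:00:01", "footwear")
    print(visit_history("a4:5e:60:00:00:01"))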

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk; they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores and gauge the impact of weather.

Cloud vs. on-premises

Organizations need to decide whether to perform data analytics on-premises (either virtualized or installed directly on the hard disk, i.e., "bare metal") or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from an on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which oftentimes we would forget to do, or just not do, because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (continued)

BY THE NUMBERS

Business interruptions come in all shapes and sizes: natural disasters, cybersecurity incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises that number skyrockets to millions. That's six zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed that 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and a lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publications 800-53 Rev. 4 and 800-34, and ISO 22301, define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK

Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics and data restoration. Oftentimes security is overlooked, and infrastructure designated as disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top-five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is harder to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just nice to have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY

Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures; disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION

Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview

When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding involved in maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures

Whether through manual procedures or automation, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving systems, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Given this scope, key management deployments need to match the organizational structure, the security assurance levels required for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management

Key management that is distributed in an organization, where keys coexist with an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
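A minimal sketch of that manual lifecycle, using the open-source Python cryptography package: the point is simply that the key must be archived somewhere other than next to the ciphertext, and that deleting every copy of the key makes the data unrecoverable. File names are illustrative.

    from pathlib import Path
    from cryptography.fernet import Fernet

    # 1. Generate a key and protect a record with it.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"sensitive customer record")

    # 2. The encrypted data stays with the application...
    Path("records.enc").write_bytes(ciphertext)

    # 3. ...while the key is archived (vaulted) in a separate location.
    vault = Path("offsite-vault")
    vault.mkdir(exist_ok=True)
    (vault / "records.key").write_bytes(key)

    # 4. End of life: deleting every archived copy of records.key renders
    #    records.enc permanently unreadable - which is why ad hoc handling is risky.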

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids the wide access exposure that, through negligence or malicious intent, could compromise keys or logs administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed here, with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage.

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving the keys in the car ignition instead of in your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. Because local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find that the cost and management simplicity originally assumed become more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-locating keys with the encrypted data provides easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management

Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, for example through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have that key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
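As a hedged sketch of that revocation workflow, the open-source PyKMIP client can talk to any KMIP-compliant key manager; connection details are assumed to live in a pykmip.conf file, and the key identifier of the stolen device is assumed to be known from inventory records.

    from kmip.core import enums
    from kmip.pie.client import ProxyKmipClient

    STOLEN_DEVICE_KEY_ID = "42"  # placeholder identifier from asset inventory

    # Connection settings (host, port, client certificate) come from pykmip.conf.
    with ProxyKmipClient(config_file="pykmip.conf") as client:
        # Mark the key as compromised, then destroy it so the stolen device can
        # never unlock its media again; the key manager logs both actions for audit.
        client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, STOLEN_DEVICE_KEY_ID)
        client.destroy(STOLEN_DEVICE_KEY_ID)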

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have a simplicity similar to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple: similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in the same place operationally

Cons:
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management

The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to ensure which devices are authorized to access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys. A centralized solution, on the other hand, runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys and administrative users, using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys.


Best practices: adopting a flexible, strategic approach

In real-world practice, local, remote and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere in the world to enforce policies, with a single set of controls to manage and a single point for auditing the security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system

Enterprise secure key management solutions that offer the flexibility of local, remote and centralized controls over keys share a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to the cost, administrative flexibility and security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation (see the sketch after this list).

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when mapping user roles to specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
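To make the policy-model point concrete, here is an illustrative sketch (a simplification for illustration, not an HPE ESKM interface) of the NIST SP 800-57 key states that a key manager's lifecycle policy typically enforces, with transitions outside the table refused:

    from enum import Enum

    class KeyState(Enum):
        PRE_ACTIVATION = "pre-activation"
        ACTIVE = "active"
        DEACTIVATED = "deactivated"
        COMPROMISED = "compromised"
        DESTROYED = "destroyed"

    # Simplified transition table based on the SP 800-57 key-state model.
    ALLOWED = {
        KeyState.PRE_ACTIVATION: {KeyState.ACTIVE, KeyState.COMPROMISED, KeyState.DESTROYED},
        KeyState.ACTIVE: {KeyState.DEACTIVATED, KeyState.COMPROMISED},
        KeyState.DEACTIVATED: {KeyState.COMPROMISED, KeyState.DESTROYED},
        KeyState.COMPROMISED: {KeyState.DESTROYED},
        KeyState.DESTROYED: set(),
    }

    def transition(current: KeyState, new: KeyState) -> KeyState:
        """Refuse any lifecycle change the policy model does not permit."""
        if new not in ALLOWED[current]:
            raise ValueError(f"illegal key transition: {current.value} -> {new.value}")
        return new

    state = KeyState.PRE_ACTIVATION
    state = transition(state, KeyState.ACTIVE)       # key placed into service
    state = transition(state, KeyState.DEACTIVATED)  # retired from protect operations
    state = transition(state, KeyState.DESTROYED)    # end of life, audit-logged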

Conclusions

As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5. Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.
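As a concrete illustration of KMIP-based enrollment, the following is a minimal sketch of how an encrypting application might request a key from a KMIP-conformant key manager using the open-source PyKMIP client library. The hostname, port, and certificate paths are placeholders, and a reachable KMIP server with matching TLS credentials is assumed; this is not a description of any specific vendor's API.

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Placeholder connection details; a real deployment points at its key manager
# and uses the client certificate issued when the application was enrolled.
client = ProxyKmipClient(
    hostname="keymanager.example.com",
    port=5696,            # standard KMIP port
    cert="client.crt",
    key="client.key",
    ca="ca.crt",
)

with client:
    # Ask the key manager to generate and vault a 256-bit AES key,
    # returning only the unique identifier to the application.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # Later, fetch the managed key by identifier when it is needed.
    key = client.get(key_id)
    print("retrieved key object:", key_id)
```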

HPE Data Security Technologies

HPE Enterprise Secure Key Manager

Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise

ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security – Data Security

HPE Security – Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use, and in-motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security – Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE

Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP

Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet, and PageWide) to address their printing needs.

How HP PageWide Technology is different

To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar spanning the entire width of a page, printing pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office

HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet, and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc's Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization

Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP

To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution

Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions

Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.



Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things – Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions, or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview

For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type, from any industry, and manage the whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (using standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE (4G), and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture

The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development/deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)

The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform – manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices (including provisioning, configuration, and monitoring), and troubleshoot IoT devices.

Network Interworking Proxy (NIP)

The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies used to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
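For illustration only, the sketch below shows how a simple device-side publisher might push a meter reading over MQTT, one of the protocols listed above, using the open-source Eclipse Paho Python client. The broker address, topic scheme, credentials, and payload fields are hypothetical; a real deployment would use whatever endpoint and naming are issued when the device is onboarded to the platform's protocol connectors.

```python
import json
import ssl
import paho.mqtt.client as mqtt

BROKER = "iot-gateway.example.com"                  # hypothetical broker endpoint
TOPIC = "meters/site-42/electricity/reading"        # hypothetical topic scheme

client = mqtt.Client(client_id="meter-000123")
client.tls_set(ca_certs="ca.crt", cert_reqs=ssl.CERT_REQUIRED)  # encrypted transport
client.connect(BROKER, port=8883)
client.loop_start()                                 # handle network I/O in the background

payload = {"kwh": 1.42, "timestamp": "2016-04-01T10:15:00Z"}
info = client.publish(TOPIC, json.dumps(payload), qos=1)        # at-least-once delivery
info.wait_for_publish()                             # block until the broker acknowledges

client.loop_stop()
client.disconnect()
```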

Data Acquisition and Verification (DAV)

DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using the oneM2M-compliant HTTP REST interface (a hedged sketch of such a retrieve appears at the end of this subsection). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data unit transformation to application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
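As referenced above, here is a hedged sketch of what an application-side read against a oneM2M-style HTTP REST interface could look like, using Python's requests library. The base URL, resource path, and originator identifier are illustrative placeholders rather than documented HPE Universal IoT Platform endpoints; the X-M2M headers follow the oneM2M HTTP binding convention.

```python
import requests

# Hypothetical endpoint and resource path for a meter's latest reading.
BASE_URL = "https://iot-platform.example.com/onem2m"
RESOURCE = "/cse-base/meter-000123/readings/latest"

headers = {
    "X-M2M-Origin": "app-energy-dashboard",  # requesting application (originator)
    "X-M2M-RI": "req-0001",                  # request identifier
    "Accept": "application/json",
}

response = requests.get(BASE_URL + RESOURCE, headers=headers, timeout=10)
response.raise_for_status()
print(response.json())                        # oneM2M-style resource representation
```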

Data Analytics

The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
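Because the analytics layer builds on Vertica, a downstream analyst could, for example, aggregate the collected readings with ordinary SQL. The sketch below uses the open-source vertica-python driver; the connection details, table, and column names are hypothetical and stand in for whatever schema a given deployment defines.

```python
import vertica_python

# Hypothetical connection details for the analytics data store.
conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analyst",
    "password": "********",
    "database": "iot",
}

# Hourly average consumption per device over the last week (hypothetical schema).
query = """
    SELECT device_id,
           DATE_TRUNC('hour', reported_at) AS hour,
           AVG(kwh) AS avg_kwh
    FROM meter_readings
    WHERE reported_at >= NOW() - INTERVAL '7 days'
    GROUP BY device_id, DATE_TRUNC('hour', reported_at)
    ORDER BY 1, 2
"""

connection = vertica_python.connect(**conn_info)
cursor = connection.cursor()
cursor.execute(query)
for device_id, hour, avg_kwh in cursor.fetchall():
    print(device_id, hour, avg_kwh)
connection.close()
```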

Operations and Business Support Systems (OSS/BSS)

The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)

The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashup for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success

The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value through monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make to a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products. By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations. Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage. Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover. The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
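As a simple illustration of that first step, the sketch below uses Python with pandas to split first-time from repeat customers in an exported order history and compare their buying behavior. The file name and column names are hypothetical placeholders for whatever your own systems export.

```python
import pandas as pd

# Hypothetical export of order history with one row per order.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Count orders per customer, then flag repeat buyers.
order_counts = orders.groupby("customer_id")["order_id"].count()
repeat_ids = order_counts[order_counts > 1].index

orders["segment"] = orders["customer_id"].isin(repeat_ids).map(
    {True: "repeat", False: "first-time"}
)

# Compare order volume and average order value by segment and marketing channel.
summary = orders.groupby(["segment", "channel"])["order_value"].agg(["count", "mean"])
print(summary)
```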

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER

One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest, by far, in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances, XYPRO Technology

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting to sales engagement and you're peers, and then you realize this difference in age.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned, veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job of servicing our customers, who are always the first priority. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get across those messages I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress: Not just another pretty face.

An integrated SQL database manager for HP NonStop.

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

copy2016 XYPRO Technology Corporation All rights reserved Brands mentioned are trademarks of their respective companies

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been an effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been something of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the bash shell (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first, and note that upgrading GNV using the HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
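The real config_h.com is a DCL procedure shipped with the GNV kits; purely as an illustration of the general idea (answer the #undef lines you know how to answer, and defer the leftovers to a hand-maintained config_vms.h), here is a small Python sketch. The KNOWN_FEATURES table and the file names are assumptions invented for the example, not part of the actual kits.

    # Toy illustration of autoconf-style header generation; not the real config_h.com.
    KNOWN_FEATURES = {
        "HAVE_STRING_H": "1",   # assumed answers for features known on this platform
        "HAVE_UNISTD_H": "1",
        "STDC_HEADERS": "1",
    }

    def generate_config_h(template_path: str, output_path: str) -> None:
        lines_out = []
        with open(template_path) as template:
            for line in template:
                stripped = line.strip()
                if stripped.startswith("#undef "):
                    symbol = stripped.split()[1]
                    if symbol in KNOWN_FEATURES:
                        # A feature we can answer for: emit a real definition.
                        lines_out.append(f"#define {symbol} {KNOWN_FEATURES[symbol]}\n")
                    else:
                        # Leave unknown symbols alone; config_vms.h can supply them.
                        lines_out.append(f"/* {symbol} left undefined */\n")
                else:
                    lines_out.append(line)
        # Package-specific corrections come last, mirroring the include at the end of config.h.
        lines_out.append('#include "config_vms.h"\n')
        with open(output_path, "w") as output:
            output.writelines(lines_out)

    generate_config_h("config.h.in", "config.h")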

In many ways, the ability to easily port open source software to OpenVMS, or to maintain a code base consistently between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are also tools outside the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and has had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40

Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in open source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical open source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined. Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee;3 such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems. In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the four protection layers (Application, Database, Object, Storage) with example traffic between them, such as formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection. When adding data protection to a legacy system, we obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is (a toy sketch of these three properties follows the list below):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
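To make the three properties concrete, here is a minimal Python sketch of a format-preserving transform over digit strings. It is a toy alternating-Feistel construction keyed with HMAC, not the NIST FFX algorithm the article describes, and every name in it (protect, recover, the hard-coded key) is invented for illustration; it simply shows that a digit string can be mapped deterministically and reversibly to another digit string of the same length.

    import hashlib
    import hmac

    KEY = b"demo-key"   # illustrative only; a real deployment uses managed keys
    ROUNDS = 8

    def _prf(data: str, round_no: int, width: int) -> int:
        # Keyed pseudo-random function for one round, reduced to `width` decimal digits.
        digest = hmac.new(KEY, f"{round_no}:{data}".encode(), hashlib.sha256).digest()
        return int.from_bytes(digest, "big") % (10 ** width)

    def protect(digits: str) -> str:
        # Toy alternating-Feistel transform: digits in, same-length digits out.
        half = len(digits) // 2
        left, right = digits[:half], digits[half:]
        for r in range(ROUNDS):
            if r % 2 == 0:
                left = f"{(int(left) + _prf(right, r, len(left))) % 10 ** len(left):0{len(left)}d}"
            else:
                right = f"{(int(right) + _prf(left, r, len(right))) % 10 ** len(right):0{len(right)}d}"
        return left + right

    def recover(digits: str) -> str:
        # Inverse transform: undo the rounds in reverse order.
        half = len(digits) // 2
        left, right = digits[:half], digits[half:]
        for r in reversed(range(ROUNDS)):
            if r % 2 == 0:
                left = f"{(int(left) - _prf(right, r, len(left))) % 10 ** len(left):0{len(left)}d}"
            else:
                right = f"{(int(right) - _prf(left, r, len(right))) % 10 ** len(right):0{len(right)}d}"
        return left + right

    pan = "4111111111111111"                             # well-known test card number
    token = protect(pan)
    assert len(token) == len(pan) and token.isdigit()    # equivalent: same format
    assert protect(pan) == token                         # referential: deterministic
    assert recover(token) == pan                         # reversible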

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the new value created by data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.
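As a small illustration of why the referential property matters for sharing, the toy protect() function from the sketch above can be applied to the key column of two data sets; because it is deterministic, the anonymized sets still join on the same keys. The sample records below are invented for the example.

    # Reusing the toy protect() from the earlier sketch: deterministic protection
    # preserves equality, so anonymized data sets still join on the same keys.
    claims = [("4111111111111111", "claim-001"), ("4111111111111120", "claim-002")]
    visits = [("4111111111111111", "2015-11-03")]

    protected_claims = [(protect(card), claim) for card, claim in claims]
    protected_visits = [(protect(card), visit) for card, visit in visits]

    # A third party can correlate records without ever seeing a real card number.
    joined = [(claim, visit)
              for key_c, claim in protected_claims
              for key_v, visit in protected_visits
              if key_c == key_v]
    print(joined)   # [('claim-001', '2015-11-03')]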

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk of acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hpe.com

A Modernization Success Story. RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being on the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

copy2015 Gravic Inc All product names mentioned are trademarks of their respective owners Specifications subject to change without notice

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


PJL support

WINDOWS, SAP, HOST, UNIX/LINUX

Learn more at hollandhouse.com/unispool-printaurus

And in case you haven't heard, getting connected with HPE's user community is easy and free.

2


3

Dr. Bill Highleyman is the Managing Editor of The Availability Digest (www.availabilitydigest.com), a monthly online publication and a resource of information on high- and continuous-availability topics. His years of experience in the design and implementation of mission-critical systems have made him a popular seminar speaker and a sought-after technical writer. Dr. Highleyman is a past chairman of ITUG, the former HP NonStop User's Group, the holder of numerous US patents, the author of Performance Analysis of Transaction Processing Systems, and the co-author of the three-volume series Breaking the Availability Barrier.

The HPE Helion Private Cloud and Cloud Broker Services
Dr. Bill Highleyman

Managing Editor

Availability Digest

ADVOCACY

First – A Reminder: Don't forget the HP-UX Boot Camp, which will be held in Chicago from April 24th through April 26th. Check out the Connect website for details.

HPE Helion. HPE Helion is a complete portfolio of cloud products and services that offers enterprise security, scalability, and performance. Helion enables customers to deploy open and secure hybrid cloud solutions that integrate private cloud services, public cloud services, and existing IT assets, allowing IT departments to respond to fast-changing market conditions and to get applications to market faster. HPE Helion is based on the open-source OpenStack cloud technology.

The Helion portfolio includes the Helion CloudSystem, which is a private cloud; the Helion Development Program, which offers IT developers a platform to build, deploy, and manage cloud applications quickly and easily; and the Helion Managed Cloud Broker, which helps customers deploy hybrid clouds in which applications span private and public clouds.

In its initial release, HPE intended to create a public cloud with Helion.

How a Hybrid Cloud Delivery Model Transforms IT (from "Become a cloud service broker," HPE white paper)

4

However, HPE has since decided not to compete with Amazon AWS and Microsoft Azure in the public-cloud space, and it withdrew support for a public Helion cloud as of January 31, 2016.

The Announcement of HP Helion. HP announced Helion in May 2014 as a portfolio of cloud products and services that would enable organizations to build, manage, and run applications in hybrid IT environments. Helion is based on the open-source OpenStack cloud. HP was quite familiar with OpenStack cloud services; it had been running OpenStack in enterprise environments for over three years. HP was a founding member of the OpenStack Foundation and a leader in the OpenStack and Cloud Foundry communities.

HP's announcement of Helion included several initiatives:

• It planned to provide OpenStack public cloud services in twenty of its existing eighty data centers worldwide.

• It offered a free version of the HP Helion OpenStack Community edition, supported by HP, for use by organizations for proofs of concept, pilots, and basic production workloads.

• The HP Helion Development Program, based on Cloud Foundry, offered IT developers an open platform to build, deploy, and manage OpenStack cloud applications quickly and easily.

• HP Helion OpenStack Professional Services assisted customers with cloud planning, implementation, and operation.

These new HP Helion cloud products and services joined the company's existing portfolio of hybrid cloud computing offerings, including the HP Helion CloudSystem, a private cloud solution.

What Is HPE Helion? HPE Helion is a collection of products and services that comprises HPE's cloud services:

• Helion is based on OpenStack, a large-scale open-source cloud project and community established to drive industry cloud standards. OpenStack is currently supported by over 150 companies. It allows service providers, enterprises, and government agencies to build massively scalable public, private, and hybrid clouds using freely available Apache-licensed software.

• The Helion Development Environment is based on Cloud Foundry, an open-source project that supports the full lifecycle of cloud development, from initial development through all testing stages to final deployment.

• The Helion CloudSystem (described in more detail later) is a cloud solution for a hybrid world. It is a fully integrated, end-to-end private cloud solution built for traditional and cloud-native workloads, and it delivers automation, orchestration, and control across multiple clouds.

• Helion Cloud Solutions provide tested, custom cloud solutions for customers. The solutions have been validated by HPE cloud experts and are based on OpenStack running on HP ProLiant servers.

OpenStack – The Open Cloud. OpenStack has three major components:

• OpenStack Compute – provisions and manages large networks of virtual machines.

• OpenStack Storage – creates massive, secure, and reliable storage using standard hardware.

• OpenStack Image – catalogs and manages libraries of server images stored on OpenStack Storage.

OpenStack Compute. OpenStack Compute provides all of the facilities necessary to support the lifecycle of instances in the OpenStack cloud. It creates a redundant and scalable computing platform comprising large networks of virtual machines. It provides the software, control panels, and APIs necessary for orchestrating a cloud, including running instances, managing networks, and controlling access to the cloud.
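Because Helion is standard OpenStack underneath, the usual open tooling applies. As one illustration, the sketch below uses the Python openstacksdk client to ask OpenStack Compute for a new instance; the cloud entry name ("helion") and the image, flavor, and network names are placeholders, so treat this as a sketch of the workflow rather than a tested Helion recipe.

    import openstack

    # Connect using credentials defined for a "helion" entry in clouds.yaml (placeholder name).
    conn = openstack.connect(cloud="helion")

    # Look up the building blocks for a new instance; all names here are placeholders.
    image = conn.image.find_image("demo-image")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Ask OpenStack Compute to provision a virtual machine and wait until it is active.
    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)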

OpenStack Storage. OpenStack Storage is modeled after Amazon's EBS (Elastic Block Store) mass store. It provides redundant, scalable data storage using clusters of inexpensive commodity servers and hard drives to store massive amounts of data. It is not a file system or a database system; rather, it is intended for long-term storage of large amounts of data (blobs). Its use of a distributed architecture with no central point of control provides great scalability, redundancy, and permanence.

continued on page 5

[Diagram: the OpenStack cloud. OpenStack Compute provisions and manages large networks of virtual machines running on hypervisor hosts; OpenStack Storage creates petabytes of secure, reliable storage using commodity hardware and stores image snapshots; OpenStack Image catalogs and manages libraries of images (server images, web pages, backups, email), including snapshots of compute nodes.]

5

OpenStack Image Service. The OpenStack Image Service is a retrieval system for virtual-machine images. It provides registration, discovery, and delivery services for these images. It can use OpenStack Storage or Amazon S3 (Simple Storage System) for storage of virtual-machine images and their associated metadata. It provides a standard RESTful web interface for querying information about stored virtual images.

The Demise of the Helion Public Cloud. After announcing its public cloud, HP realized that it could not compete with the giants of the industry, Amazon AWS and Microsoft Azure, in the public-cloud space. Therefore HP (now HPE) sunsetted its Helion public cloud program in January 2016.

However, HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications on HPE Helion OpenStack and the HPE Helion Development Platform. It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure.

HPE has been practical in terminating its public cloud program: the purchase of Eucalyptus provides ease of integration with Amazon AWS. Investment in the development of the open-source OpenStack model is protected and remains a robust and solid approach for the building, testing, and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore, HPE supports customers who want to run HPE's Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure.

The Helion Private Cloud – The HPE Helion CloudSystem. Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture"1 is an excellent guide to the Helion CloudSystem.

1 HP Helion Rack solution architecture, HP White Paper, 2015.

ADVOCACY: The HPE Helion Private Cloud and Cloud Broker Services

continued from page 4

6

7

Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media and an active community member, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog.

He started his "social persona" as HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as CalvinZito.

You can also contact him via email at calvin.zito@hp.com.

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage. The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement; hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure. To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says. Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says:

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure, and software-defined storage. This requires registration.

• One more: this focuses on use cases including Virtual Desktop Infrastructure, Remote-Office Branch-Office, and Public & Private Cloud. Again, this one requires registration.

8

What others are saying. Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing: VIDEO HERE

The City of Los Angeles has also adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed": VIDEO HERE

Get more on HPE Hyper-Converged solutions. The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA; HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250: VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016 | Fairmont San Jose Hotel | San Jose, CA


Chris Purcell has 28+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog.

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, and less constrained by physical factors; in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation: the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of its infrastructure.

As I noted in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things that's needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation. You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way: today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies; you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically, you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG: 4/10/2016 Riyadh, 4/12/2016 Doha, 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016: 4/18/2016 Berlin

HP-UX Boot Camp: 4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting: 5/5/2016 Plano, Texas

BITUG BIG SIG: 5/12/2016 London

HPE NonStop Partner Technical Symposium: 5/24/2016 Palo Alto, California

Discover Las Vegas 2016: 6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
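To make the idea of a unified provisioning API a little more concrete, here is a minimal sketch of what composing a node from pooled resources through a REST endpoint might look like. The endpoint paths, template name, and token handling are hypothetical placeholders for illustration only; they are not the actual HPE Synergy or OneView API.

```python
import requests

# Hypothetical composable-infrastructure endpoint and credentials (illustrative only).
API_BASE = "https://composer.example.com/api"
TOKEN = "example-session-token"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def compose_node(template_name: str, node_name: str) -> str:
    """Request a new node composed from pooled compute, storage, and fabric resources."""
    payload = {
        "name": node_name,
        "template": template_name,       # e.g., a profile template describing CPU, disk, and network needs
        "storage": {"capacityGB": 500},  # pull only the capacity this workload needs from the pool
        "network": {"fabric": "prod-fabric-a"},
    }
    resp = requests.post(f"{API_BASE}/server-profiles", json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["uri"]            # URI of the newly composed node

if __name__ == "__main__":
    uri = compose_node("web-tier-template", "web-node-07")
    print(f"Composed node available at {uri}")
```

The point of the sketch is simply that provisioning becomes a programmable call against a template rather than a manual build, which is what makes minutes-scale changes plausible.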

An infrastructure for both IT worlds. This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter. HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure - Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.


Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey, with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all. When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators (data scientists, developers, IT operations managers, chief information security officers, and startup founders) who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites; everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast to create a data stream that can ingest all the different information that's coming in.

A retail case study. Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
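As a rough illustration of the mechanics described above, the sketch below groups Wi-Fi probe events by MAC address and splits them into visits, so a returning device can be recognized. The event format, store zones, and visit-gap threshold are assumptions for illustration, not any particular retail analytics product.

```python
from collections import defaultdict
from datetime import datetime

# Each probe event: (MAC address, access-point zone, timestamp) -- an assumed format.
probe_events = [
    ("aa:bb:cc:11:22:33", "entrance", datetime(2016, 3, 1, 10, 2)),
    ("aa:bb:cc:11:22:33", "denim",    datetime(2016, 3, 1, 10, 9)),
    ("aa:bb:cc:11:22:33", "entrance", datetime(2016, 3, 8, 17, 45)),  # a later, returning visit
]

def build_visit_history(events, new_visit_gap_hours=4):
    """Group probe events by MAC and start a new visit whenever a long gap occurs."""
    by_mac = defaultdict(list)
    for mac, zone, ts in sorted(events, key=lambda e: e[2]):
        by_mac[mac].append((zone, ts))

    history = {}
    for mac, pings in by_mac.items():
        visits, current = [], [pings[0]]
        for prev, cur in zip(pings, pings[1:]):
            gap_hours = (cur[1] - prev[1]).total_seconds() / 3600
            if gap_hours > new_visit_gap_hours:
                visits.append(current)
                current = []
            current.append(cur)
        visits.append(current)
        history[mac] = visits
    return history

history = build_visit_history(probe_events)
for mac, visits in history.items():
    print(mac, "has visited", len(visits), "time(s); last visit zones:", [z for z, _ in visits[-1]])
```

In practice the same per-device history would be joined with purchase records before it reaches a clerk, but the grouping step is the part the Wi-Fi data enables.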

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk; they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises. Organizations need to decide whether to perform data analytics on-premises (either virtualized or installed directly on the hard disk, i.e., "bare metal") or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which we would often forget to do, or just not do, because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning cont

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed that 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and a lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it harder to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as reliable controls over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
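To make the manual, ad hoc nature of local key management concrete, here is a minimal sketch of the kind of script a local team might use to generate and vault a data-encryption key. The file paths and the idea of a plaintext "vault" file are assumptions for illustration only, and they also illustrate the risk this section describes: anyone who copies or deletes the vault file owns or destroys the key.

```python
import json
import secrets
from datetime import datetime
from pathlib import Path

VAULT = Path("local_key_vault.json")  # illustrative local "vault": a plain file kept next to the data

def generate_and_archive_key(key_name: str, bits: int = 256) -> str:
    """Generate an AES key locally and record it in the local vault file."""
    key_hex = secrets.token_hex(bits // 8)
    vault = json.loads(VAULT.read_text()) if VAULT.exists() else {}
    vault[key_name] = {"key": key_hex, "created": datetime.utcnow().isoformat()}
    VAULT.write_text(json.dumps(vault, indent=2))
    return key_hex

# Ad hoc usage: generate a key for a backup job and hope the vault file is never lost or copied.
if __name__ == "__main__":
    generate_and_archive_key("backup-2016-q2")
    print(f"Key archived in {VAULT} -- no central backup, no audit trail.")
```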

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, with their impact on supporting enterprise policy.

Figure 1 Local key management over a local network where keys are stored with the encrypted storage

Nathan Turajski

18

However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, because each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audit and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect, and misuse
• Places "all eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
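As a rough sketch of what that remote revocation step could look like, the snippet below uses the open-source PyKMIP client to revoke and then destroy a key held on a KMIP-compliant key manager. The hostname, certificate files, and key identifier are illustrative assumptions, and the exact operations an actual key manager (such as ESKM) exposes and logs will depend on the product.

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Illustrative connection details for a KMIP-compliant key manager (not a real endpoint).
client = ProxyKmipClient(
    hostname="keymanager.example.com",
    port=5696,
    cert="client-cert.pem",
    key="client-key.pem",
    ca="ca.pem",
)

stolen_device_key_id = "42"  # hypothetical identifier of the key the stolen device requests at boot-up

with client:
    # Mark the key as compromised so the device can no longer retrieve it...
    client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, stolen_device_key_id)
    # ...then destroy it; the key manager's audit log records both operations as compliance evidence.
    client.destroy(stolen_device_key_id)
```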

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility; if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility; the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time with consistent policy and removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize risks

Figure 4 Central key management over wide area networks enables a single set of reliable controls and auditing over keys


Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class, centralized, and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

• Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels. A hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

• Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

• Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

• Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

• High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

• Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

• Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility (a minimal log-forwarding sketch follows this list).

• Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
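As a small illustration of the logging and enterprise integration points above, the sketch below forwards key-lifecycle audit events to a syslog collector that a SIEM could consume. The event fields and collector address are assumptions for illustration; an actual key manager such as ESKM emits its own audit log format and forwarding configuration.

```python
import json
import logging
import logging.handlers
from datetime import datetime

# Illustrative syslog/SIEM collector address; real deployments would point at their own collector.
handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
logger = logging.getLogger("key-lifecycle-audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def audit_key_event(action: str, key_id: str, actor: str) -> None:
    """Emit a structured key-lifecycle event (create, rotate, revoke, destroy) to the SIEM."""
    event = {
        "timestamp": datetime.utcnow().isoformat() + "Z",
        "action": action,
        "key_id": key_id,
        "actor": actor,
    }
    logger.info(json.dumps(event))

# Example: record that an administrator rotated a key, so auditors see it in one consolidated place.
audit_key_event("rotate", "backup-2016-q2", "admin@example.com")
```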

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5 Clustering key management enables endpoints to connect to local key servers a primary data center andor disaster recovery locations depending on high availability needs and global distribution of encryption applications


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security: HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use, and in-motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE

Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet, and PageWide, to address their printing needs.

How HP PageWide Technology is different. To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office. HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, laser, inkjet, and now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.

24

Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers; plus, they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization. Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information, and help make the most of your IT investment.

Turning to HP. To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates that endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of network options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-power, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.

continued on pg 27

"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited to a specific device type, vertical, protocol, and set of business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify and consolidate it and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Marketplace, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors of growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it is removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices, using both standards-based and proprietary communication, and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE, 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management as well as data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and to various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and the associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform overview - manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It is based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and for other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
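To make the connector idea concrete, the minimal sketch below shows the kind of device-side traffic an MQTT controller would ingest. It is illustrative only: the broker host, topic scheme, and payload layout are assumptions rather than HPE Universal IoT Platform specifics, and it assumes Python with the paho-mqtt package.

```python
# Illustrative device-side publish over MQTT (one of the protocols listed above).
# Broker host, topic, and payload fields are hypothetical examples.
import json
import time

import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

BROKER_HOST = "iot-connector.example.com"   # hypothetical protocol connector endpoint
TOPIC = "meters/site-42/electricity"        # hypothetical topic naming scheme

client = mqtt.Client(client_id="meter-0001")
client.connect(BROKER_HOST, port=1883)

reading = {"deviceId": "meter-0001", "timestamp": int(time.time()), "kwh": 1523.7}

# The platform-side connector would subscribe to this topic and map the
# payload into the resource model managed by the data acquisition layer.
client.publish(TOPIC, json.dumps(reading), qos=1)
client.disconnect()
```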

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it is flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using the oneM2M-compliant HTTP REST interface (a sketch of such a call appears after the list below). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
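As a rough sketch of the north-bound REST access referenced above, an application might retrieve the latest content instance for a device container along these lines. The host, resource path, originator identity, and response layout are assumptions for illustration, not taken from HPE product documentation; the sketch follows oneM2M HTTP binding conventions (the X-M2M-Origin and X-M2M-RI headers and the 'la' latest-instance virtual resource).

```python
# Hypothetical oneM2M-style retrieval of the latest reading for one device.
# Endpoint, credentials, and resource names are illustrative assumptions.
import requests

BASE_URL = "https://iot-platform.example.com:8443"
RESOURCE = "/onem2m/in-cse/meter-0001/data/la"   # 'la' = latest contentInstance

headers = {
    "X-M2M-Origin": "S-billing-app",   # application originator (hypothetical)
    "X-M2M-RI": "req-0001",            # request identifier
    "Accept": "application/json",
}

resp = requests.get(BASE_URL + RESOURCE, headers=headers, timeout=10)
resp.raise_for_status()

# A oneM2M contentInstance wraps the device payload in "m2m:cin" -> "con".
latest = resp.json().get("m2m:cin", {}).get("con")
print("latest reading:", latest)
```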

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.



WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy turning previously ignored data into business gold

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.
2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.
3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.
4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
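As a small, concrete example of that first step, the sketch below (in Python with pandas) splits customers into first-time and repeat buyers from a simple order-history extract and compares their spend; the file name and column names are assumptions for illustration.

```python
# Minimal sketch: separate first-time from repeat buyers and compare spend.
# Assumes an orders extract with customer_id, order_date, and amount columns.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

per_customer = orders.groupby("customer_id").agg(
    order_count=("order_date", "count"),
    total_spend=("amount", "sum"),
)
per_customer["segment"] = per_customer["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Average order count and spend per segment: a first look at which
# behaviors mark high-value versus low-value buyers.
print(per_customer.groupby("segment")[["order_count", "total_spend"]].mean())
```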

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals.

The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.
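To illustrate the kind of decision such a policy engine makes (this is a conceptual toy in Python, not ClearPass configuration or its API), combining role, device type, and location into a network assignment might look like this:

```python
# Toy illustration of a context-aware access decision; NOT ClearPass syntax.
# Roles, device types, locations, and VLAN names are made-up examples.
RULES = [
    # (role,       device_type, location,  resulting network access)
    ("staff",      "managed",   "any",     "internal-vlan"),
    ("staff",      "byod",      "any",     "restricted-vlan"),
    ("contractor", "any",       "onsite",  "guest-vlan"),
    ("guest",      "any",       "any",     "guest-vlan"),
]

def access_for(role: str, device_type: str, location: str) -> str:
    """Return the network segment for a connection request, or deny it."""
    for r_role, r_dev, r_loc, network in RULES:
        if r_role == role and r_dev in ("any", device_type) and r_loc in ("any", location):
            return network
    return "deny"

print(access_for("staff", "byod", "onsite"))           # restricted-vlan
print(access_for("contractor", "managed", "offsite"))  # deny
```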

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.



The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals, and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of those costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances
Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99 percent of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
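A first pass at "monitor for deviation from normal established processes" can be quite simple. The sketch below flags privileged logons outside a user's usual hours; the users, event fields, and baselines are illustrative assumptions rather than a specific NonStop or XYGATE interface, and a real deployment would feed events from audit logs or a SIEM rather than an in-memory list.

```python
# Toy deviation check: flag privileged logons outside a user's usual hours.
# Users, baselines, and event fields are made-up examples for illustration.
from datetime import datetime

# Baseline of normal logon hours per user (e.g., learned from past activity).
BASELINE_HOURS = {
    "ops.admin": range(7, 19),    # 07:00-18:59
    "batch.super": range(0, 6),   # overnight batch window
}

events = [
    {"user": "ops.admin", "time": "2016-03-02T08:15:00", "privileged": True},
    {"user": "ops.admin", "time": "2016-03-02T23:40:00", "privileged": True},
    {"user": "batch.super", "time": "2016-03-03T02:10:00", "privileged": True},
]

for event in events:
    hour = datetime.fromisoformat(event["time"]).hour
    normal_hours = BASELINE_HOURS.get(event["user"], range(24))
    if event["privileged"] and hour not in normal_hours:
        print(f"ALERT: {event['user']} privileged logon at {event['time']} "
              "falls outside the established baseline")
```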

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring me as a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community; everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are next to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition for these six months is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?", even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff!

Jeff: Thank you.


SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

New: Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first; also, upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+.

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers around the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches over NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source Software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to Postscript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.



Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.
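To see why the format preservation mentioned above matters for legacy integration, consider the toy sketch below: a keyed Feistel permutation over a decimal string whose output has exactly the same length and character set as the input, so an existing fixed-width column or copybook field does not need to change. This illustrates the idea only; it is not the NIST FF1/FF3 algorithms, and production systems should use a vetted implementation.

```python
# Toy format-preserving permutation over decimal strings (illustration only;
# NOT the NIST FF1/FF3 algorithms -- use a vetted library in production).
# The point: ciphertext keeps the same length and digit alphabet, so a
# legacy CHAR(16) card-number column needs no schema or code changes.
import hashlib
import hmac

def _prf(key: bytes, data: str, rnd: int, width: int) -> int:
    """Keyed round function returning an integer with at most `width` digits."""
    digest = hmac.new(key, f"{rnd}:{data}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)

def toy_fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    """Feistel network over a digit string of length >= 2."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for rnd in range(rounds):
        f = _prf(key, right, rnd, len(left))
        new_right = str((int(left) + f) % (10 ** len(left))).zfill(len(left))
        left, right = right, new_right
    return left + right

pan = "4005550000000019"            # example 16-digit value
token = toy_fpe_encrypt(b"demo-key", pan)
print(token, len(token))            # still 16 digits, fits the existing field
```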

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee [3]; such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by the US Department of Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained essentially constant, bounded by O(1), the number of records exposed has grown on the order of O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection (OSI) seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers from top to bottom: Application, Database, Object, Storage. Example traffic between layers includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with that of another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
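To make the equivalent, referential, and reversible properties concrete, here is a minimal sketch in Python. It is deliberately not the NIST FFX algorithm itself (a production system would use a vetted implementation of the standard); it is only a toy Feistel-style transform over digit strings that happens to exhibit the same three properties. The key and the sample card number are made-up values for illustration.

```python
import hashlib
import hmac

def _round_value(key: bytes, round_no: int, other_half: int, width: int) -> int:
    """Keyed pseudo-random round function (toy, not NIST FF1/FFX)."""
    digest = hmac.new(key, f"{round_no}:{other_half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Map a digit string (length >= 2) to another digit string of the same length."""
    split = len(digits) // 2
    lw, rw = split, len(digits) - split                # fixed widths for the two halves
    left, right = int(digits[:split]), int(digits[split:])
    for r in range(rounds):                            # alternate which half is modified
        if r % 2 == 0:
            left = (left + _round_value(key, r, right, lw)) % (10 ** lw)
        else:
            right = (right + _round_value(key, r, left, rw)) % (10 ** rw)
    return str(left).zfill(lw) + str(right).zfill(rw)  # zfill preserves leading zeroes

def fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Invert fpe_encrypt by running the same rounds backwards."""
    split = len(digits) // 2
    lw, rw = split, len(digits) - split
    left, right = int(digits[:split]), int(digits[split:])
    for r in reversed(range(rounds)):
        if r % 2 == 0:
            left = (left - _round_value(key, r, right, lw)) % (10 ** lw)
        else:
            right = (right - _round_value(key, r, left, rw)) % (10 ** rw)
    return str(left).zfill(lw) + str(right).zfill(rw)

key = b"demo-key-not-for-production"
pan = "4111111111111111"                               # 16-digit test card number
protected = fpe_encrypt(key, pan)
assert len(protected) == len(pan) and protected.isdigit()  # equivalent
assert protected == fpe_encrypt(key, pan)                  # referential (deterministic)
assert fpe_decrypt(key, protected) == pan                  # reversible
print(pan, "->", protected)
```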

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
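As a small illustration of that idea, the sketch below reuses the fpe_encrypt() function from the earlier example to de-identify a shared key column in two hypothetical data sets. The table contents and field names are invented for the example, not taken from the case studies that follow.

```python
key = b"demo-key-not-for-production"

# Two hypothetical data sets that share a sensitive identifier column.
visits = [
    {"patient_id": "1000000001", "clinic": "North", "visit_type": "screening"},
    {"patient_id": "1000000002", "clinic": "South", "visit_type": "follow-up"},
]
risk_scores = [
    {"patient_id": "1000000001", "risk_score": 0.12},
    {"patient_id": "1000000002", "risk_score": 0.91},
]

def de_identify(rows, field):
    """Replace a sensitive identifier column with its FPE-protected form."""
    return [{**row, field: fpe_encrypt(key, row[field])} for row in rows]

shared_visits = de_identify(visits, "patient_id")
shared_scores = de_identify(risk_scores, "patient_id")

# A third-party analytics firm can still join the de-identified data sets,
# because the protected values are referential (same input, same output) ...
scores_by_id = {row["patient_id"]: row for row in shared_scores}
joined = [{**v, **scores_by_id[v["patient_id"]]} for v in shared_visits]

# ... while only the data owner, holding the key, can re-identify a record.
print(joined)
```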

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1. Reinvigorate RAS (reliability, availability, and serviceability) Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.
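As a quick back-of-the-envelope check (my arithmetic, not a figure from the article), here is what those availability levels translate to in allowed downtime per year:

```python
minutes_per_year = 365.25 * 24 * 60
for label, availability in [("three nines (99.9%)", 0.999),
                            ("four nines (99.99%)", 0.9999),
                            ("five nines (99.999%)", 0.99999)]:
    allowed = (1 - availability) * minutes_per_year
    print(f"{label}: about {allowed:.1f} minutes of downtime per year")
# five nines works out to roughly 5.3 minutes of unplanned downtime per year
```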

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

Page 3: Connect Converge Spring 2016


PJL support

WINDOWS SAP HOSTUNIXLINUX

Learn more at hollandhouse.com/unispool-printaurus

And in case you haven't heard, getting connected with HPE's user community is easy and free.

2


3

Dr. Bill Highleyman is the Managing Editor of The Availability Digest (www.availabilitydigest.com), a monthly online publication and a resource of information on high- and continuous-availability topics. His years of experience in the design and implementation of mission-critical systems have made him a popular seminar speaker and a sought-after technical writer. Dr. Highleyman is a past chairman of ITUG, the former HP NonStop User's Group, the holder of numerous US patents, the author of Performance Analysis of Transaction Processing Systems, and the co-author of the three-volume series Breaking the Availability Barrier.

The HPE Helion Private Cloud and Cloud Broker Services
Dr. Bill Highleyman

Managing Editor

Availability Digest

ADVOCACY

First – A Reminder Don't forget the HP-UX Boot Camp, which will be held in Chicago from April 24th through April 26th. Check out the Connect website for details.

HPE Helion HPE Helion is a complete portfolio of cloud products and services that offers enterprise security, scalability, and performance. Helion enables customers to deploy open and secure hybrid cloud solutions that integrate private cloud services, public cloud services, and existing IT assets, allowing IT departments to respond to fast-changing market conditions and to get applications to market faster. HPE Helion is based on the open-source OpenStack cloud technology.

The Helion portfolio includes the Helion CloudSystem, which is a private cloud; the Helion Development Program, which offers IT developers a platform to build, deploy, and manage cloud applications quickly and easily; and the Helion Managed Cloud Broker, which helps customers to deploy hybrid clouds in which applications span private and public clouds.

In its initial release, HPE intended to create a public cloud with Helion.

How a Hybrid Cloud Delivery Model Transforms IT (from "Become a cloud service broker," HPE white paper)

4

However, HPE has since decided not to compete with Amazon AWS and Microsoft Azure in the public-cloud space, and it withdrew support for a public Helion cloud as of January 31, 2016.

The Announcement of HP Helion HP announced Helion in May 2014 as a portfolio of cloud products and services that would enable organizations to build, manage, and run applications in hybrid IT environments. Helion is based on the open-source OpenStack cloud. HP was quite familiar with the OpenStack cloud services: it had been running OpenStack in enterprise environments for over three years. HP was a founding member of the OpenStack Foundation and a leader in the OpenStack and Cloud Foundry communities.

HP's announcement of Helion included several initiatives:

• It planned to provide OpenStack public cloud services in twenty of its existing eighty data centers worldwide.

• It offered a free version of the HP Helion OpenStack Community edition, supported by HP, for use by organizations for proofs of concept, pilots, and basic production workloads.

• The HP Helion Development Program, based on Cloud Foundry, offered IT developers an open platform to build, deploy, and manage OpenStack cloud applications quickly and easily.

• HP Helion OpenStack Professional Services assisted customers with cloud planning, implementation, and operation.

These new HP Helion cloud products and services joined the company's existing portfolio of hybrid cloud computing offerings, including the HP Helion CloudSystem, a private cloud solution.

What Is HPE Helion? HPE Helion is a collection of products and services that comprises HPE's Cloud Services:

• Helion is based on OpenStack, a large-scale open-source cloud project and community established to drive industry cloud standards. OpenStack is currently supported by over 150 companies. It allows service providers, enterprises, and government agencies to build massively scalable public, private, and hybrid clouds using freely available Apache-licensed software.

• The Helion Development Environment is based on Cloud Foundry, an open-source project that supports the full lifecycle of cloud development, from initial development through all testing stages to final deployment.

• The Helion CloudSystem (described in more detail later) is a cloud solution for a hybrid world. It is a fully integrated, end-to-end private cloud solution built for traditional and cloud-native workloads, and it delivers automation, orchestration, and control across multiple clouds.

• Helion Cloud Solutions provide tested, custom cloud solutions for customers. The solutions have been validated by HPE cloud experts and are based on OpenStack running on HP ProLiant servers.

OpenStack – The Open Cloud OpenStack has three major components:

• OpenStack Compute – provisions and manages large networks of virtual machines.

• OpenStack Storage – creates massive, secure, and reliable storage using standard hardware.

• OpenStack Image – catalogs and manages libraries of server images stored on OpenStack Storage.

OpenStack Compute OpenStack Compute provides all of the facilities necessary to support the life cycle of instances in the OpenStack cloud. It creates a redundant and scalable computing platform comprising large networks of virtual machines. It provides the software, control panels, and APIs necessary for orchestrating a cloud, including running instances, managing networks, and controlling access to the cloud.
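For a sense of what provisioning against OpenStack Compute looks like from a client's point of view, here is a minimal sketch using the Python openstacksdk client. The cloud name, image, flavor, and network names are assumptions for illustration; in practice they would come from your own clouds.yaml and service catalog.

```python
import openstack

# Connect using a cloud entry named "mycloud" defined in clouds.yaml (assumed).
conn = openstack.connect(cloud="mycloud")

# Look up the building blocks of an instance in the cloud's catalog.
image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Ask OpenStack Compute to provision a virtual machine on that network.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the scheduler has placed the VM and it reports ACTIVE.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```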

OpenStack Storage OpenStack Storage is modeled after Amazon's EBS (Elastic Block Store) mass store. It provides redundant, scalable data storage, using clusters of inexpensive commodity servers and hard drives to store massive amounts of data. It is not a file system or a database system; rather, it is intended for long-term storage of large amounts of data (blobs). Its use of a distributed architecture with no central point of control provides great scalability, redundancy, and permanence.

[Diagram: the OpenStack cloud. OpenStack Compute provisions and manages large networks of virtual machines running on hypervisor hosts. OpenStack Storage creates petabytes of secure, reliable storage using commodity hardware. OpenStack Image catalogs and manages libraries of images (server images, web pages, backups, email), storing image snapshots of compute nodes on OpenStack Storage.]

5

OpenStack Image Service The OpenStack Image Service is a retrieval system for virtual-machine images. It provides registration, discovery, and delivery services for these images. It can use OpenStack Storage or Amazon S3 (Simple Storage System) for storage of virtual-machine images and their associated metadata. It provides a standard web RESTful interface for querying information about stored virtual images.

The Demise of the Helion Public Cloud After announcing its public cloud, HP realized that it could not compete with the giants of the industry, Amazon AWS and Microsoft Azure, in the public-cloud space. Therefore HP (now HPE) sunsetted its Helion public cloud program in January 2016.

However, HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications based on HPE Helion OpenStack and the HPE Helion Development Platform. It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure.

HPE has been practical in terminating its public cloud program: the purchase of Eucalyptus provides ease of integration with Amazon AWS. Investment in the development of the open-source OpenStack model is protected, and OpenStack remains a robust and solid approach for the building, testing, and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore, HPE supports customers who want to run HPE's Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure.

The Helion Private Cloud – The HPE Helion CloudSystem Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture"1 is an excellent guide to the Helion CloudSystem.
1 HP Helion Rack solution architecture, HP White Paper, 2015.


6

7

Calvin Zito is a 33-year veteran in the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media and active in communities, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog

He started his "social persona" as HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as @CalvinZito.

You can also contact him via email at calvin.zito@hp.com

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement; hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure, and software-defined storage. This requires registration.

• One more: this one focuses on use cases, including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.

8

What others are saying Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing. VIDEO HERE

The City of Los Angeles also has adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016, Fairmont San Jose Hotel, San Jose, CA

9

Chris Purcell has 28+ years of experience working with technology within the datacenter, currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as @Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of its infrastructure.

As discussed in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way – today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically, you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh, 4/12/2016 Doha, 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
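In HPE's portfolio that unified API is exposed through HPE OneView, which also fronts Synergy. The sketch below is only an outline of the call flow: the appliance address, credentials, API version, and template name are assumptions on my part, and the endpoint paths should be checked against the OneView REST documentation rather than taken from this example.

```python
import requests

ONEVIEW = "https://oneview.example.com"      # hypothetical appliance address
HEADERS = {"X-Api-Version": "300", "Content-Type": "application/json"}

# 1. Authenticate and pick up a session token.
auth = requests.post(f"{ONEVIEW}/rest/login-sessions",
                     json={"userName": "administrator", "password": "secret"},
                     headers=HEADERS, verify=False)
HEADERS["Auth"] = auth.json()["sessionID"]

# 2. Find the server profile template that describes the desired configuration.
templates = requests.get(f"{ONEVIEW}/rest/server-profile-templates",
                         headers=HEADERS, verify=False).json()["members"]
template = next(t for t in templates if t["name"] == "web-tier-template")

# 3. Derive a new profile from the template and apply it to available hardware --
#    this is the "compose" step that pulls pooled resources into one footprint.
profile = requests.get(f"{ONEVIEW}{template['uri']}/new-profile",
                       headers=HEADERS, verify=False).json()
profile["name"] = "web-tier-node-01"
profile["serverHardwareUri"] = "/rest/server-hardware/..."   # choose a free bay
requests.post(f"{ONEVIEW}/rest/server-profiles",
              json=profile, headers=HEADERS, verify=False)
```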

An infrastructure for both IT worlds This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy, the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven, cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of the cloud giants.

Hewlett Packard Enterprise Technology User Group

10

11

Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey, with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years, we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators (data scientists, developers, IT operations managers, chief information security officers, and startup founders) who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.

12

"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites; everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory, or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up their historical data that was captured on the previous visit.
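A rough sketch of the underlying idea (the MAC addresses, zones, and timestamps below are invented sample data, not Dasher's implementation): access-point pings keyed by MAC address can be rolled up into per-visitor dwell times by store zone.

```python
from collections import defaultdict
from datetime import datetime

# (mac_address, access_point_zone, timestamp) tuples as captured by in-store Wi-Fi.
pings = [
    ("aa:bb:cc:00:11:22", "entrance", "2016-03-01T10:00:05"),
    ("aa:bb:cc:00:11:22", "denim",    "2016-03-01T10:04:40"),
    ("aa:bb:cc:00:11:22", "denim",    "2016-03-01T10:12:10"),
    ("dd:ee:ff:33:44:55", "entrance", "2016-03-01T10:06:00"),
]

# Group pings by visitor (MAC address) and by zone within the store.
visits = defaultdict(lambda: defaultdict(list))
for mac, zone, stamp in pings:
    visits[mac][zone].append(datetime.fromisoformat(stamp))

# Dwell time per zone: elapsed time between the first and last ping in that zone.
for mac, zones in visits.items():
    for zone, times in zones.items():
        minutes = (max(times) - min(times)).total_seconds() / 60
        print(f"{mac} spent {minutes:.1f} min near {zone}")
```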

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk; they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises Organizations need to decide whether to perform data analytics on-premises, either virtualized or installed directly on the hard disk (i.e., "bare metal"), or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment Those companies typically try to maximize CPU usage and then add nodes to increase capacity

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.

13

STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology

14

Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which we would often forget to do or just not do because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the Mission-Critical computing marketplace.

15

How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (cont.)

BY THE NUMBERS Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages... the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed that 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, NIST SP 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would rather experience downtime than deal with a breach. The last thing you want during disaster recovery is to be hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large, exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it harder to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.
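Putting "identical controls" into practice can begin with automated configuration comparison. The sketch below is a hypothetical illustration (not an XYPRO or HPE tool): it diffs a production security baseline against the one captured from the DR system and reports any control that is missing or weaker. The file paths and control names are assumptions for the example.

import json

def baseline_gaps(prod_path, dr_path):
    # Load JSON baselines, e.g. {"tls_min_version": "1.2", "audit_logging": true}
    with open(prod_path) as f:
        prod = json.load(f)
    with open(dr_path) as f:
        dr = json.load(f)
    # Any control whose DR value differs from production is reported as a gap
    return [
        f"{control}: production={expected!r} dr={dr.get(control)!r}"
        for control, expected in prod.items()
        if dr.get(control) != expected
    ]

if __name__ == "__main__":
    for gap in baseline_gaps("prod_baseline.json", "dr_baseline.json"):
        print("DR gap:", gap)

Run against baselines exported from both environments, a report like this gives auditors and operations the same view of DR security drift that they already expect for production.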

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It rsquos also di f f icult to imagine the PCI Data Security S t a n d a rd s C o m m i t te e re l a x i n g i t s re q u i re m e n t s on cardholder data protection for the duration a card processing application is running on a disaster recovery system Itrsquos just not going to happen

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems and effectively prepare for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the human factors and safeguards involved in maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
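To make those manual steps concrete, here is a minimal sketch of what a purely local, ad hoc lifecycle often boils down to: generate a key, write it to a local archive, and later delete it. The paths and 256-bit key length are assumptions for illustration, and the example deliberately shows the weakness described above, since nothing behind these few lines provides central backup, policy enforcement or an audit trail.

import os
import secrets

ARCHIVE_DIR = "/secure/key-archive"   # hypothetical local vault location

def generate_and_archive(key_name):
    key = secrets.token_bytes(32)      # 256-bit data-encryption key
    os.makedirs(ARCHIVE_DIR, mode=0o700, exist_ok=True)
    path = os.path.join(ARCHIVE_DIR, key_name + ".key")
    with open(path, "wb") as f:
        f.write(key)                   # a real deployment would wrap this with a key-encryption key
    os.chmod(path, 0o600)              # restrict access to the owner
    return key

def delete_key(key_name):
    os.remove(os.path.join(ARCHIVE_DIR, key_name + ".key"))

If the archive directory is lost or corrupted, the data encrypted under those keys becomes unrecoverable with it.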

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exist

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, with the impact toward supporting enterprise policy.

Figure 1 Local key management over a local network where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, as each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audit and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
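As an illustration of that remote locate-and-destroy step, the following sketch uses the open-source PyKMIP library to revoke and then destroy a key on a KMIP-capable key manager. The hostname, certificate paths and key UID are assumptions for the example, not a documented ESKM procedure.

from kmip.core import enums
from kmip.pie.client import ProxyKmipClient

def retire_key(key_uid):
    client = ProxyKmipClient(
        hostname="keyserver.example.com",   # assumed KMIP-capable key manager
        port=5696,
        cert="client-cert.pem",
        key="client-key.pem",
        ca="ca-cert.pem",
    )
    with client:
        # Mark the key as compromised, then destroy it so the stolen device
        # can never retrieve it again; the server records both operations.
        client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, key_uid)
        client.destroy(key_uid)

The audit log entries the key manager keeps for the revoke and destroy operations are what provide the compliance evidence described above.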

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment.

Figure 2 Remote key management separates encryption key management from the encrypted data


Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at a lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros
• Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which makes the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in their same place operationally

Cons
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, an enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed in clusters, redundantly for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies with reliability, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros
• Similar to remote administration, economies of scale achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize risks

Figure 4 Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations and expertise are best to remain in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard to measure security assurance levels. A hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a minimal client sketch follows these considerations).

Policy model: Key lifecycle controls should follow NIST SP800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
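For a sense of what standards-based interoperability looks like in practice, the short sketch below uses the open-source PyKMIP library to create and then retrieve a 256-bit AES key over KMIP. The server address and certificate paths are assumptions; any KMIP-compliant key manager, ESKM included, should accept equivalent requests, though the exact configuration will differ by deployment.

from kmip.core import enums
from kmip.pie.client import ProxyKmipClient

client = ProxyKmipClient(
    hostname="keyserver.example.com",   # assumed KMIP server endpoint
    port=5696,
    cert="client-cert.pem",
    key="client-key.pem",
    ca="ca-cert.pem",
)

with client:
    key_uid = client.create(enums.CryptographicAlgorithm.AES, 256)  # the key itself stays on the server
    aes_key = client.get(key_uid)       # retrieved only when the application needs to use it
    print("Created key", key_uid, "with length", aes_key.cryptographic_length, "bits")

Because the application only holds a UID until it explicitly fetches the key, controls, partitioning and audit logging all remain with the central key manager.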

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys and users with a reliable set of controls.

Figure 5 Clustering key management enables endpoints to connect to local key servers, a primary data center and/or disaster recovery locations, depending on high availability needs and global distribution of encryption applications


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use and in motion. Our solutions provide advanced encryption, tokenization and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google) and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.




Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts, and has scaled it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet and PageWide, to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, laser, inkjet and now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services and solutions into one integrated approach. Working with you, we assess, deploy and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies and www.hp.com/go/printerspeeds.



IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility or bandwidth), which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example, low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1 Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2 The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management and charging
• Data processing layer operators to acquire data, then verify, consolidate and support with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions, or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type, from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle when the devices become distributed by the millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices, using standards-based and proprietary communication, and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed or mobile networks. These include GPRS (2G and 3G), LTE, 4G and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries and municipalities
• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services and solutions, IoT operators, CSPs and enterprises alike, can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer) and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.



HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo, and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-service model.

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, which manages the end-to-end lifecycle of the IoT service and associated gateways, devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Graphic: HPE Universal IoT Platform: manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association and others) and IoT gateways and devices, including provisioning, configuration and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices, and communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST and others.
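To give a flavor of the device side of such a protocol, the sketch below publishes a sensor reading over MQTT, the kind of message a protocol connector built with the Protocol Factory would ingest. The broker address, topic scheme and payload fields are assumptions for illustration, not the platform's documented interface.

import json
import time
import paho.mqtt.publish as publish

reading = {
    "deviceId": "meter-0042",               # hypothetical device identifier
    "temperature_c": 21.7,
    "timestamp": int(time.time()),
}

publish.single(
    topic="sensors/meter-0042/telemetry",   # hypothetical topic naming scheme
    payload=json.dumps(reading),
    hostname="mqtt-broker.example.com",     # assumed broker in front of the platform
    port=1883,
    qos=1,                                  # at-least-once delivery
)

On the platform side, a matching device controller would subscribe to the topic, normalize the payload and hand it to the data acquisition layer described next.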

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications in turn can discover, access and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a retrieval sketch appears at the end of this section). The DAV component is also responsible for transformation, validation and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure, encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
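As referenced above, here is a hedged sketch of how an application might read data back through a oneM2M-style REST interface, retrieving the latest contentInstance from a container over the standard HTTP binding. The base URL, resource path and originator value are assumptions for illustration, not the platform's documented endpoints.

import requests

CSE_BASE = "https://iot-platform.example.com/~/in-cse/in-name"   # hypothetical CSE root

response = requests.get(
    f"{CSE_BASE}/meter-0042/telemetry/la",      # 'la' addresses the latest contentInstance (oneM2M convention)
    headers={
        "X-M2M-Origin": "Cdemo-application",    # originator identifier (assumed)
        "X-M2M-RI": "req-0001",                 # request identifier
        "Accept": "application/json",
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())

Because the data model is uniform, the same style of request works regardless of which south-bound protocol originally delivered the reading.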

Data Analytics
The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution and visualization environment for most types of analytics, including batch and real-time (based on complex-event processing), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media and geo-fencing), predictive (determination) and prescriptive (recommendation).
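For a simplified picture of the descriptive end of that range, the sketch below runs an hourly aggregate over a hypothetical telemetry table using the open-source vertica-python driver. The connection details, table and column names are assumptions for illustration, not part of the platform.

import vertica_python

conn_info = {
    "host": "vertica.example.com",   # assumed analytics database endpoint
    "port": 5433,
    "user": "iot_analyst",
    "password": "********",
    "database": "iot",
}

query = """
    SELECT device_id,
           DATE_TRUNC('hour', reading_time) AS hour,
           AVG(temperature_c)               AS avg_temp,
           COUNT(*)                         AS readings
    FROM telemetry
    GROUP BY device_id, DATE_TRUNC('hour', reading_time)
    ORDER BY hour DESC
    LIMIT 24
"""

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute(query)
    for device_id, hour, avg_temp, readings in cursor.fetchall():
        print(device_id, hour, round(avg_temp, 1), readings)

Predictive and prescriptive models would typically be trained on the same tables, pulling longer histories from the cold storage tier described earlier.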

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer' and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased early SMB adopters are recognizing the profound bottom line impact Big Data can make to a business This early adopter competitive advantage is still there but the window is closing Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes

Big data is a special kind of alchemy turning previously ignored data into business gold

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."
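As a minimal illustration of that kind of first-time versus repeat segmentation, here is a short Python sketch; the file name and column names are hypothetical, so adapt them to whatever your point-of-sale or e-commerce system actually exports.

import pandas as pd

# Illustrative only: split customers into first-time vs. repeat buyers and
# compare order behavior. Assumed columns: customer_id, order_id, order_total.
orders = pd.read_csv("orders.csv")
order_counts = orders.groupby("customer_id")["order_id"].nunique()
orders["segment"] = orders["customer_id"].map(
    lambda cid: "repeat" if order_counts[cid] > 1 else "first_time"
)

# Order count and average order value per segment
print(orders.groupby("segment")["order_total"].agg(["count", "mean"]))

Even a small report like this starts to answer the questions above: how much more a repeat customer is worth, and which marketing efforts are attracting the higher-value segment.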

PUT THE FOUNDATION IN PLACE Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development and Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.
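For readers who want to see what such an ROI figure means in practice, here is a tiny worked example; the dollar amounts are invented placeholders for illustration, not values taken from the Ponemon report.

# Placeholder arithmetic only: the figures below are made up to show how a
# 23% ROI number is computed, not taken from the Ponemon study.
annual_benefit = 1_150_000   # hypothetical avoided losses plus efficiency gains
annual_cost = 935_000        # hypothetical license, deployment, and operations cost
roi = (annual_benefit - annual_cost) / annual_cost
print(f"ROI: {roi:.0%}")     # prints "ROI: 23%"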

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop.

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of Jan 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The port of cPython 3.6a0+ by John Malmberg is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the settings that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. There are also updated ports in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems; even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1
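The diagram itself is not reproduced here, but the tally behind it is straightforward to recreate. As a rough sketch, assuming a CSV export from the HHS breach portal (the column names below follow that export but should be treated as assumptions):

import pandas as pd

# Count incidents and sum exposed records per year from the HHS breach export.
breaches = pd.read_csv("hhs_breach_report.csv",
                       parse_dates=["Breach Submission Date"])
per_year = (breaches
            .groupby(breaches["Breach Submission Date"].dt.year)["Individuals Affected"]
            .agg(incidents="count", records_exposed="sum"))
print(per_year.loc[2010:2015])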

Integrating Data Protection Into Legacy Systems: Methods And Practices Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Figure: the data protection stack. Layers from top to bottom: Application, Database, Object, Storage. Example traffic between layers: formatted data items (Application–Database), files and directories (Database–Object), disk blocks (Object–Storage). Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
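To make the three properties concrete, here is a small, self-contained Python sketch. It is emphatically not the NIST FF1/FFX construction and is not production cryptography; it is a toy Feistel-style transform over 16-digit strings, with a made-up key, used only to show that the output keeps the input's format (equivalent), the same input always yields the same output (referential), and an inverse routine recovers the original (reversible). A real deployment would call the FF1/FFX operations of an FPE library instead.

import hmac, hashlib

KEY = b"demo-key-not-for-production"   # hypothetical key, for illustration only
ROUNDS = 10
HALF = 8                               # split a 16-digit string into two halves
MOD = 10 ** HALF

def _round(value: int, round_no: int) -> int:
    # Keyed pseudo-random round function over one half of the digit string.
    msg = f"{round_no}:{value}".encode()
    digest = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % MOD

def protect(pan: str) -> str:
    left, right = int(pan[:HALF]), int(pan[HALF:])
    for i in range(ROUNDS):
        left, right = right, (left + _round(right, i)) % MOD
    return f"{left:0{HALF}d}{right:0{HALF}d}"

def unprotect(token: str) -> str:
    left, right = int(token[:HALF]), int(token[HALF:])
    for i in reversed(range(ROUNDS)):
        left, right = (right - _round(left, i)) % MOD, left
    return f"{left:0{HALF}d}{right:0{HALF}d}"

pan = "4111111111111111"
token = protect(pan)
assert len(token) == 16 and token.isdigit()   # equivalent: same format and length
assert protect(pan) == token                  # referential: same input, same output
assert unprotect(token) == pan                # reversible: inverse recovers original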

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.
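As a minimal sketch of what sharing relations without sharing identities can look like, consider two hypothetical extracts keyed on FPE-protected patient identifiers; the tables, token values, and screening threshold below are invented for illustration.

import pandas as pd

# Two already-anonymized extracts keyed on a protected patient identifier.
visits = pd.DataFrame({
    "patient_token": ["82710431", "82710431", "55304117"],  # FPE output, not real IDs
    "visit_type": ["screening", "follow-up", "screening"],
})
labs = pd.DataFrame({
    "patient_token": ["82710431", "55304117"],
    "a1c": [6.9, 5.4],
})

# The analytics partner can still join on the protected key...
joined = visits.merge(labs, on="patient_token")
# ...and return only the flagged tokens; the data owner alone can reverse the
# protection and contact the corresponding patients.
at_risk_tokens = joined.loc[joined["a1c"] >= 6.5, "patient_token"].unique()
print(at_risk_tokens)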

Summary Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction of IT infrastructure costs.
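To get a feel for what those IDC percentages could mean in practice, here is a minimal back-of-the-envelope sketch in Python; the baseline spend figures are invented purely for illustration, not drawn from the study.

    # Hypothetical annual baseline costs - illustrative numbers only
    sap_licensing = 1_000_000    # annual SAP/Oracle licensing spend
    infrastructure = 600_000     # annual spend on the underutilized server islands

    # Upper-bound savings reported by IDC for scale-up consolidation
    licensing_savings = sap_licensing * 0.18       # up to 18%
    infrastructure_savings = infrastructure * 0.55 # up to 55%

    print(f"Potential licensing savings:      ${licensing_savings:,.0f}")
    print(f"Potential infrastructure savings: ${infrastructure_savings:,.0f}")
    print(f"Combined annual savings (upper bound): ${licensing_savings + infrastructure_savings:,.0f}")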

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.



Dr. Bill Highleyman is the Managing Editor of The Availability Digest (www.availabilitydigest.com), a monthly online publication and a resource of information on high- and continuous-availability topics. His years of experience in the design and implementation of mission-critical systems have made him a popular seminar speaker and a sought-after technical writer. Dr. Highleyman is a past chairman of ITUG, the former HP NonStop User's Group, the holder of numerous US patents, the author of Performance Analysis of Transaction Processing Systems and the co-author of the three-volume series Breaking the Availability Barrier.

The HPE Helion Private Cloud and Cloud Broker Services
Dr. Bill Highleyman

Managing Editor

Availability Digest

ADVOCACY

First – A Reminder: Don't forget the HP-UX Boot Camp, which will be held in Chicago from April 24th through April 26th. Check out the Connect website for details.

HPE Helion
HPE Helion is a complete portfolio of cloud products and services that offers enterprise security, scalability and performance. Helion enables customers to deploy open and secure hybrid cloud solutions that integrate private cloud services, public cloud services and existing IT assets, allowing IT departments to respond to fast-changing market conditions and to get applications to market faster. HPE Helion is based on the open-source OpenStack cloud technology.

The Helion portfolio includes the Helion CloudSystem which is a private cloud the Helion Development Program which offers IT developers a platform to build deploy and manage cloud applications quickly and easily and the Helion Managed Cloud Broker which helps customers to deploy hybrid clouds in which applications span private and public clouds

In its initial release, HPE intended to create a public cloud with Helion. However, it has since decided not to compete with Amazon AWS and Microsoft Azure in the public-cloud space. It has withdrawn support for a public Helion cloud as of January 31, 2016.

How a Hybrid Cloud Delivery Model Transforms IT (from "Become a cloud service broker," HPE white paper)

The Announcement of HP Helion
HP announced Helion in May 2014 as a portfolio of cloud products and services that would enable organizations to build, manage and run applications in hybrid IT environments. Helion is based on the open-source OpenStack cloud. HP was quite familiar with OpenStack cloud services: it had been running OpenStack in enterprise environments for over three years. HP was a founding member of the OpenStack Foundation and a leader in the OpenStack and Cloud Foundry communities.

HP's announcement of Helion included several initiatives:

bull It planned to provide OpenStack public cloud services in twenty of its existing eighty data centers worldwide

bull It offered a free version of the HP Helion OpenStack Community edition supported by HP for use by organizations for proofs of concept pilots and basic production workloads

bull The HP Helion Development Program based on Cloud Foundry offered IT developers an open platform to build deploy and manage OpenStack cloud applications quickly and easily

bull HP Helion OpenStack Professional Services assisted customers with cloud planning implementation and operation

These new HP Helion cloud products and services joined the companyrsquos existing portfolio of hybrid cloud computing offerings including the HP Helion CloudSystem a private cloud solution

What Is HPE Helion?
HPE Helion is a collection of products and services that comprises HPE's Cloud Services:

bull Helion is based on OpenStack a large-scale open-source cloud project and community established to drive industry cloud standards OpenStack is currently supported by over 150 companies It allows service providers enterprises and government agencies to build massively scalable public private and hybrid clouds using freely available Apache-licensed software

bull The Helion Development Environment is based on Cloud Foundry an open-source project that supports the full lifecycle of cloud developments from initial development through all testing stages to final deployment

bull The Helion CloudSystem (described in more detail later) is a cloud solution for a hybrid world It is a fully integrated end-to-end private cloud solution built for traditional and cloud native workloads and delivers automation orchestration and control across multiple clouds

bull Helion Cloud Solutions provide tested custom cloud

solutions for customers The solutions have been validated by HPE cloud experts and are based on OpenStack running on HP Proliant servers

OpenStack – The Open Cloud
OpenStack has three major components:

bull OpenStack Compute - provisions and manages large networks of virtual machines

bull OpenStack Storage - creates massive secure and reliable storage using standard hardware

bull OpenStack Image - catalogs and manages libraries of server images stored on OpenStack Storage

OpenStack Compute
OpenStack Compute provides all of the facilities necessary to support the life cycle of instances in the OpenStack cloud. It creates a redundant and scalable computing platform comprising large networks of virtual machines. It provides the software, control panels and APIs necessary for orchestrating a cloud, including running instances, managing networks and controlling access to the cloud.
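To make "provisions and manages large networks of virtual machines" concrete, here is a minimal sketch using the open-source openstacksdk Python library to boot a single instance. The cloud name, image, flavor and network names are placeholders you would replace with values from your own OpenStack (for example, Helion-based) environment.

    import openstack

    # Connect using credentials defined in clouds.yaml under an assumed profile name
    conn = openstack.connect(cloud="helion")

    # Look up an image, flavor and network by name - illustrative placeholders
    image = conn.compute.find_image("cirros")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Provision one virtual machine through the OpenStack Compute service
    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print("Instance status:", server.status)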

OpenStack Storage
OpenStack Storage is modeled after Amazon's EBS (Elastic Block Store) mass store. It provides redundant, scalable data storage using clusters of inexpensive commodity servers and hard drives to store massive amounts of data. It is not a file system or a database system; rather, it is intended for long-term storage of large amounts of data (blobs). Its use of a distributed architecture with no central point of control provides great scalability, redundancy and permanence.

Figure: The OpenStack Cloud – OpenStack Compute provisions and manages large networks of virtual machines (hosts, hypervisors and VMs); OpenStack Storage creates petabytes of secure, reliable storage using commodity hardware; OpenStack Image catalogs and manages libraries of images (server images, web pages, backups, email) and stores snapshots of compute nodes.

OpenStack Image Service

OpenStack Image Service is a retrieval system for virtual- machine images It provides registration discovery and delivery services for these images It can use OpenStack Storage or Amazon S3 (Simple Storage System) for storage of virtual-machine images and their associated metadata It provides a standard web RESTful interface for querying information about stored virtual images
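Because the Image service exposes a standard RESTful interface, a client can list registered images with an ordinary HTTP call. Below is a minimal sketch using the Python requests library; the endpoint URL and authentication token are placeholders for whatever your own OpenStack deployment issues.

    import requests

    # Placeholder values - substitute your Image service endpoint and a valid token
    GLANCE_ENDPOINT = "https://openstack.example.com:9292"
    AUTH_TOKEN = "replace-with-a-real-keystone-token"

    # Query the Image service (v2 API) for registered virtual-machine images
    resp = requests.get(
        f"{GLANCE_ENDPOINT}/v2/images",
        headers={"X-Auth-Token": AUTH_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()

    for image in resp.json().get("images", []):
        print(image["id"], image["name"], image.get("size"))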

The Demise of the Helion Public Cloud After announcing its public cloud HP realized that it could not compete with the giants of the industry Amazon AWS and Microsoft Azure in the public-cloud space Therefore HP (now HPE) sunsetted its Helion public cloud program in January 2016

However HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications based on HPE Helion OpenStack and the HPE Helion Development Platform It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure

HPE has been pragmatic in winding down its public cloud program: the purchase of Eucalyptus provides ease of integration with Amazon AWS, while investment in the development of the open-source OpenStack model is protected and remains a robust, solid approach for the building, testing and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore HPE supports customers who want to run HPErsquos Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure

The Helion Private Cloud – The HPE Helion CloudSystem
Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture" (HP White Paper, 2015) is an excellent guide to the Helion CloudSystem.


6

7

Calvin Zito is a 33-year veteran in the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media and active in communities, he has blogged for 7 years.

You can find his blog at hpcomstorageblog

He started his ldquosocial personardquo as HPStorageGuy and after the HP separation manages an active community of storage fans on Twitter as CalvinZito

You can also contact him via email at calvinzitohpcom

Let Me Help You With Hyper-ConvergedCalvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software defined storage The move to hyper-converged was enabled by software defined storage (SDS) Hyper-converged combines compute and storage in a single platform and SDS was a requirement Hyper-converged is a deployment option for SDS I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options

Top 10 things you need to consider when buying a hyper-converged infrastructure To achieve the best possible outcomes from your investment ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business Check out Top 10 things you need to consider when buying a hyper-converged infrastructure

Survey says: Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says

bull First the executive summary of the research

bull Next the survey results on datacenter challenges hyper-converged infrastructure and software-defined storage This requires registration

bull One more: this focuses on use cases including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.

8

What others are saying Herersquos a customer Sonora Quest talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing VIDEO HERE

The City of Los Angeles also has adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions: The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpecomstorageTryVSA and check out the storage that is inside our hyper-converged solutions

Lastly herersquos a ChalkTalk I did with a really good overview of the Hyper Converged 250 VIDEO HERE

Learn about about HPE Software-Defined Storage solutions Learn more about HPE Hyper-Converged solutions

November 13-16 2016Fairmont San Jose HotelSan Jose CA

9

Chris Purcell has 28+ years of experience working with technology within the datacenter Currently focused on integrated systems (server storage and networking which come wrapped with a complete set of services)

You can find Chris on Twitter as Chrispman01 Check out his contribution to the HP CI blog at wwwhpcomgociblog

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

gtgtTOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of its infrastructure.

As another recent blog, Gear up for the idea economy with Composable Infrastructure, explains, one of the things needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage and fabric resources.

The many virtues of disaggregation
You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way – today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization your choices are limited by form factors; basically you have to choose small, medium or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.
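A quick sketch of that stranded-capacity problem, with made-up numbers: two applications with opposite CPU and disk profiles forced into the same standardized "large" box size.

    # Illustrative resource needs for two applications: (CPU cores, TB of disk)
    apps = {"disk_heavy_app": (4, 40), "cpu_heavy_app": (32, 2)}

    # One standardized "large" box - illustrative capacity per box
    box_cores, box_disk = 36, 48

    stranded_cores = stranded_disk = 0
    for name, (cores, disk) in apps.items():
        # Each app gets its own box, so whatever it doesn't use is stranded
        stranded_cores += box_cores - cores
        stranded_disk += box_disk - disk

    print(f"Cores bought but unused: {stranded_cores} of {2 * box_cores}")
    print(f"Disk bought but unused:  {stranded_disk} TB of {2 * box_disk} TB")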

UPCOMING EVENTS

MENUG: 4/10/2016 Riyadh, 4/12/2016 Doha, 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016: 4/18/2016 Berlin

HP-UX Boot Camp: 4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting: 5/5/2016 Plano, Texas

BITUG BIG SIG: 5/12/2016 London

HPE NonStop Partner Technical Symposium: 5/24/2016 Palo Alto, California

Discover Las Vegas 2016: 6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
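The "unified API" idea is easiest to picture as a single REST call that applies a profile (compute, storage and fabric settings) to hardware drawn from the pools. The sketch below is purely illustrative: the endpoint path, payload fields and token are hypothetical stand-ins, not a documented HPE interface.

    import requests

    # Hypothetical composable-infrastructure API endpoint and session token
    API = "https://composer.example.com/rest"
    HEADERS = {"X-Auth-Token": "session-token-placeholder", "Content-Type": "application/json"}

    # Hypothetical profile: pull resources from the pools into one footprint for an app
    profile = {
        "name": "erp-node-01",
        "computePool": {"cores": 16, "memoryGiB": 256},
        "storagePool": {"capacityGiB": 2048, "tier": "flash"},
        "fabric": {"networks": ["prod-vlan-110"], "bandwidthGb": 20},
    }

    # One call provisions the whole footprint instead of touching each silo separately
    resp = requests.post(f"{API}/server-profiles", json=profile, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print("Profile applied:", resp.json().get("uri", "<unknown>"))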

An infrastructure for both IT worlds
This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services By doing so it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven agility-focused IT that companies need to thrive in the Idea Economy

You can read more about how to do that here HPE Composable Infrastructure ndash Bridging Traditional IT with the Idea Economy

And herersquos where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants

Hewlett Packard Enterprise Technology User Group

10

11

Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases – and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign

Today we have "approachable" analytics, analytics-as-a-service and hardened architectures that are almost turnkey – with back-end hardware, database support and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all When organizations try to do more with data analytics to benefit their business they have to take into consideration the technology skills and culture that exist in their company

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage and networking types of problems. Over the past five or six years we've been spending our energy, strategy and time on the big areas around mobility, security and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators – data scientists, developers, IT operations managers, chief information security officers and startup founders – who use technology to improve the way we live, work and play. View an archive of his regular podcasts.

12

"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company or a large enterprise getting information from all your smaller remote sites – everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study
Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up their historical data that was captured on the previous visit.
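As a simplified sketch of how that could work, the snippet below groups Wi-Fi probe events by MAC address to build a per-visitor history; the event data and zone names are invented for illustration.

    from collections import defaultdict
    from datetime import datetime

    # Invented probe events captured by in-store access points: (MAC, zone, timestamp)
    events = [
        ("aa:bb:cc:11:22:33", "entrance", "2016-03-01T10:02:00"),
        ("aa:bb:cc:11:22:33", "denim",    "2016-03-01T10:09:00"),
        ("aa:bb:cc:11:22:33", "denim",    "2016-03-08T17:40:00"),
    ]

    # Build a visit history keyed by the device's MAC address (the unique identifier)
    history = defaultdict(list)
    for mac, zone, ts in events:
        history[mac].append((datetime.fromisoformat(ts), zone))

    # When the same MAC reappears, a clerk-facing app could surface the prior visits
    for mac, visits in history.items():
        print(mac, "->", [(t.date().isoformat(), zone) for t, zone in sorted(visits)])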

"When people are using a mobile device they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction in real time, regardless of where they are. They don't have to be at their desk – they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store that person can benefit by understanding where in the store to put jeans to impact sales Rather than working from a quarterly report with information thatrsquos outdated for the season sales clerks can make changes the same day they receive the data as well as see what other sites are doing This opens up a new world of opportunities in terms of the way retailers place merchandise staff stores and gauge the impact of weather

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises – either virtualized or installed directly on the hard disk (i.e., "bare metal") – or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out he adds and you donrsquot know if yoursquore going to need a full 400-node cluster to run your analytics platform

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment Those companies typically try to maximize CPU usage and then add nodes to increase capacity

"The best advice I could give is whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More Listen to the podcast of Dana Gardnerrsquos interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies

Read more on tackling big data analytics Learn how the future is all about fast data Find out how big data trends affect your business

13

STEVE TCHERCHIAN CISO amp Product Manager XYGATE SecurityOne XYPRO Technology

14

Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which oftentimes we would forget to do, or just not do, because "we were just too busy." After the theft we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure and product security to ensure the best security experience for customers in the Mission-Critical computing marketplace.

15

How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning cont

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cybersecurity incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
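Putting the figures cited above together, a short calculation shows how quickly the numbers add up; the large-enterprise hourly figure is an illustrative assumption, not from the studies quoted.

    # Figures cited above: ~$8,000/hour for a small business, outages often 24+ hours
    hourly_cost = 8_000
    outage_hours = 24
    print(f"One 24-hour outage for a small business: ${hourly_cost * outage_hours:,}")

    # Illustrative large-enterprise rate of $1M/hour for the same outage length
    print(f"Illustrative large-enterprise outage:    ${1_000_000 * outage_hours:,}")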

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is – THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev4, 800-34 and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.

16

Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools and other solutions to protect their production systems. Typically only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five US bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems and effectively prepare for recovery operations while keeping security at the forefront and keeping your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.

17

Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and as such is often mandated in regulatory compliance rules as reliable controls over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, and archive or vault keys for long-term recovery – and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exist

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that would undermine local encryption and key management lifecycle operations.

Local remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, along with their impact on supporting enterprise policy.

Figure 1 Local key management over a local network where keys are stored with the encrypted storage

Nathan Turajski

18

However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications – the analogy being locking your front door but placing keys under a doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement over centralized approaches economies of scale will be limited as applications expand as each local key management solution requires its own resources and procedures to maintain reliably within unique silos As local approaches tend to require manual administration the keys are at higher risk of abuse or loss as organizations evolve over time especially when administrators change roles compared with maintenance by a centralized team of security experts As local-level encryption and secure key management applications begin to scale over time organizations will find the cost and management simplicity originally assumed now becoming more complex making audit and consistent controls unreliable Organizations with limited IT resources that are oversubscribed will need to solve new operational risks

Pros:
bull May improve security through obscurity and isolation from a broader organization that could add access control risks
bull Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
bull Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
bull Often implemented via manual procedures over key lifecycles – prone to error, neglect and misuse
bull Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
bull May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
bull Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
bull Data mobility hurdles – media moved between locations requires key management to be moved also
bull Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application such as a console UI via secure IP networks This is ideal for dark data centers or hosted services that are not easily accessible andor widely distributed locations where applications need to deploy across a regionally dispersed environment

Provides higher assurance security by separating keys from the encrypted data
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way the economies of cloud flexibility and scalability are enabled at a lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data

While application keys are no longer co-located with data locally encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally Although economies of scale are not improved this approach can have similar simplicity as local methods while also suffering from a similar dependence on manual procedures

Pros:
bull Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which would make the system more vulnerable to compromise
bull Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
bull Cost effective when kept simple – similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
bull Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
bull Improves data mobility – if encryption devices move, key management systems can remain in their same place operationally

Cons:
bull Manual procedures don't improve security if they are still not part of a systematic key management approach
bull No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified – or, commonly, an enterprise secure key management – system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications keys and administrative users using automated controls that follow security best practices Unlike distributed key management systems that may operate locally centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies with reliability while ensuring that activity logging is tracked consistently for auditing purposes and alerts and reporting are more efficiently distributed and escalated when necessary

Pros:
bull Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
bull Coordinated partitioning of applications, keys and users to improve on the benefit of local management
bull Automation and consistency of key lifecycle procedures universally enforced, removing the risk of manual administration practices and errors
bull Typically managed over secured networks from any location to serve global encryption deployments
bull Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
bull Improves data mobility – the key management system remains centrally coordinated with high availability
bull Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
bull Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time with consistent policy and the removal of redundancies
bull If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system – best practices can minimize these risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere in the world to enforce policies, with a single set of controls to manage and a single point for auditing the security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
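To make the KMIP interoperability point concrete, here is a minimal sketch of how an encryption application might request a new AES key from a KMIP-compliant key manager and later retrieve it by identifier. It uses the open-source PyKMIP client library purely as an illustration; the hostname, port, certificate paths, and key name are placeholder assumptions, and an ESKM deployment would use its own connection details and policies.

```python
# Minimal sketch: an encryption application requesting and retrieving a key
# from a KMIP-compliant key manager. Uses the open-source PyKMIP client as an
# illustration; hostname, port, and certificate paths are placeholders.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

client = ProxyKmipClient(
    hostname="keymanager.example.com",   # placeholder key manager address
    port=5696,                           # default KMIP port
    cert="/etc/pki/app-client.pem",      # client certificate for mutual TLS
    key="/etc/pki/app-client.key",
    ca="/etc/pki/km-ca.pem",
)

with client:
    # Ask the key manager to generate and vault a 256-bit AES key.
    key_id = client.create(
        enums.CryptographicAlgorithm.AES,
        256,
        name="app-volume-encryption-key",
    )

    # Later, any authorized endpoint can retrieve the same key by identifier;
    # the key itself never has to live alongside the encrypted data or backups.
    key = client.get(key_id)
    print("retrieved key", key_id)
```

Because the application only holds a key identifier, rotation, access control, and audit logging stay with the central key manager rather than in each application silo.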

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment The best of all worlds can be achieved with an enterprise strategy that coordinates applications keys and users with a reliable set of controls

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover identity and access management for administrators and encryption devices secure backup and recovery a local certificate authority and a secure audit logging facility for policy compliance validation Together with HPE Secure Encryption for protecting data-at-rest ESKM will help you meet the highest government and industry standards for security interoperability and auditability

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use, and in-motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases.
CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet, and PageWide) to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass More than 40000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper The printhead ejects each drop at a consistent weight speed and direction to place a correct-sized ink dot in the correct location Because the paper moves instead of the printhead the devices are dependable and offer breakthrough print speeds

Additionally HP PageWide Technology uses Original HP pigment inks providing each print with high color saturation and dark crisp text Pigment inks deliver superb output quality are rapid-drying and resist fading water and highlighter smears on a broad range of papers

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet, and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions such as HP JetAdvantage Solutions which allows you to configure devices conduct remote diagnostics and monitor supplies from one central interface HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business help protect devices data and documents and enforce printing policies across your organization And HP JetAdvantage Workflow Solutions help employees easily capture manage and share information and help make the most of your IT investment

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


Delivering on the IoT Customer Experience

1 Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2 The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries with a focus on mobility and drones Nigel has led multiple businesses with HPE in Telco Unified Communications Alliances and software development


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach integration must be redone for each and every use case IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries As a result these silos become inhibitors for growth as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects It also requires the high level of flexibility scalability cost efficiency and versatility that a next-generation IoT platform can offer

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type from any industry, and manage the whole ecosystem from the time the application is onboarded until it is removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices Given its carrier grade telco applications heritage the solution is highly scalable and versatile For example platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform relative to traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model

HPE equips IoT operators with end-to-end device remote management including device discovery configuration and software management The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric (as data and its monetization are the essence of the IoT business model) and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with the option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules:

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, which manages the end-to-end lifecycle of the IoT service and associated gateways/devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

Figure: HPE Universal IoT Platform highlights: manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings.


Hierarchical customer account modeling coupled with the Role-Based Access Control (RBAC) mechanism enables various mutually beneficial service models such as B2B B2C and B2B2C models

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices, and for communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies used to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
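As an illustration of the kind of protocol connector such a layer hosts, the sketch below subscribes to device telemetry over MQTT (using the Eclipse Paho client) and normalizes each message into a flat record that a resource-oriented data layer could ingest. The broker address, topic layout, and payload fields are assumptions for the example, not details of the HPE implementation.

```python
# Sketch of a simple MQTT "device controller": subscribe to telemetry and
# normalize payloads into records for a resource-oriented data layer.
# Assumes the Eclipse Paho MQTT client; broker/topic/payload layout are illustrative.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"      # placeholder broker address
TOPIC = "devices/+/telemetry"      # one topic per device, wildcard on device ID


def on_connect(client, userdata, flags, rc):
    print("connected with result code", rc)
    client.subscribe(TOPIC)


def on_message(client, userdata, msg):
    # Topic layout assumed: devices/<device-id>/telemetry
    device_id = msg.topic.split("/")[1]
    try:
        payload = json.loads(msg.payload.decode("utf-8"))
    except ValueError:
        return  # drop malformed readings; a real connector would log and alert

    record = {
        "device": device_id,
        "metric": payload.get("metric"),
        "value": payload.get("value"),
        "timestamp": payload.get("ts"),
    }
    print("normalized record:", record)  # hand off to the data acquisition layer here


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```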

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, maintaining it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation as defined in the rules for the given data element (a minimal sketch of this step follows the list below)

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
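Here is a minimal sketch of the validation step described above: given a time-ordered series of readings from one device, missing elements are filled from the neighboring known values, and anything that cannot be repaired is flagged for re-acquisition from the device. The rule shown is illustrative only, not the platform's actual rule syntax.

```python
# Sketch: fill missing data elements from neighboring known values,
# flagging gaps that cannot be repaired for re-acquisition from the device.
# The rule shown is illustrative only.
def repair_series(readings):
    """readings: list of floats, with None marking missing elements."""
    repaired, to_reacquire = list(readings), []
    for i, value in enumerate(repaired):
        if value is not None:
            continue
        prev_vals = [v for v in repaired[:i] if v is not None]
        next_vals = [v for v in repaired[i + 1:] if v is not None]
        if prev_vals and next_vals:
            # simple rule: average of the nearest known neighbors
            repaired[i] = (prev_vals[-1] + next_vals[0]) / 2.0
        else:
            to_reacquire.append(i)  # cannot repair; ask the device again
    return repaired, to_reacquire


values, gaps = repair_series([21.0, 21.4, None, 22.1, None])
print(values)  # [21.0, 21.4, 21.75, 22.1, None]
print(gaps)    # [4] -> index 4 queued for re-acquisition
```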

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure/encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity for the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and rollout of the industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased early SMB adopters are recognizing the profound bottom line impact Big Data can make to a business This early adopter competitive advantage is still there but the window is closing Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes

Big data is a special kind of alchemy turning previously ignored data into business gold

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
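As a concrete starting point, the sketch below uses pandas to split an order history into first-time and repeat customers and compare their buying behavior. The column names (customer_id, order_id, order_total) and the CSV file are assumptions for the example, not part of any specific HPE solution.

```python
# Sketch: segment customers into first-time vs. repeat buyers from raw order data.
# Column names and the CSV file are illustrative placeholders.
import pandas as pd

orders = pd.read_csv("orders.csv")  # one row per order

# Count orders per customer, then label each order with the buyer's segment.
order_counts = orders.groupby("customer_id")["order_id"].nunique()
orders["segment"] = orders["customer_id"].map(
    lambda cid: "repeat" if order_counts[cid] > 1 else "first-time"
)

# Compare buying behavior across the two segments.
summary = orders.groupby("segment")["order_total"].agg(["count", "mean", "sum"])
print(summary)
```

Even a simple split like this can show which segment responds to which marketing effort, which is exactly the kind of small, repeatable analysis to start with.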

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal The IT organization loves the limited configuration and management requirements due to the automated workflow

On average they see 5,000 devices connected to the network at any time and have experienced consistently good performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba a Hewlett Packard Enterprise Company She is a seasoned marketing professional who enjoys engaging with customers learning how they use technology to their advantage and telling their success stories Her hobbies include cycling scuba diving organic gardening and raising chickens


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances: Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and U.S. diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
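To illustrate the SIEM integration point, here is a minimal sketch that forwards a security event to a collector over syslog using Python's standard library, with the event formatted loosely along the lines of the Common Event Format (CEF) that ArcSight consumes. The collector address, field values, and vendor/product names are assumptions for the example, not a NonStop- or ArcSight-specific API.

```python
# Sketch: forward a security event to a SIEM collector over syslog, formatted
# loosely as Common Event Format (CEF). Host, port, and field values are
# placeholders for illustration.
import logging
import logging.handlers

collector = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
logger = logging.getLogger("security-events")
logger.addHandler(collector)
logger.setLevel(logging.INFO)

# CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension
event = (
    "CEF:0|ExampleCo|AuditForwarder|1.0|100|Privileged logon outside change window|7|"
    "suser=OPS.ADMIN src=10.0.0.25 msg=Superuser logon at 02:14 local time"
)
logger.info(event)
```

The point is simply that events worth alerting on should leave the system of origin and land where the security team is already watching, rather than staying in a local log nobody reviews.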

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder Sr Director Business Development and Strategic Alliances XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf, back in the early Tandem days, and there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle Thatrsquos such a great story

Jeff Itrsquos crazy really You never hear anyone say that kind of stuff Even when I hear myself say it itrsquos like ldquoWow That is pretty coolrdquo And the talent we have on this team is amazing Wersquore a seasoned veteran group for the most part There are people who have been here for over 30 years and therersquos consistent account coverage over that same amount of time You just donrsquot see that anywhere else And the camaraderie we have with the group not only within the HPE team but across the community everybody knows each other because they have been doing it for a long time Maybe itrsquos out there in other places I just havenrsquot seen it The people at HPE are really unconditional in the way that they approach the job the customers and the partners All of that just lends itself to the feeling you would want to have

Tom Every time Jeff left he gained a skill The biggest was when he left to go to IBM and lead the software marketing group there He came back with all kinds of wonderful ideas for marketing that we utilize to this day

Jeff If you were to ask me five years ago where I would envision myself or what would I want to be doing Irsquom doing it Itrsquos a little bit surreal sometimes but at the same time itrsquos an honor

Tom Jeff is such a natural to lead NonStop One thing that I donrsquot do very well is I donrsquot have the desire to get involved with marketing Itrsquos something Irsquom just not that interested in but Jeff is We are at a very critical and exciting time with NonStop X where marketing this is going to be absolutely the highest priority Hersquos the right guy to be able to take NonStop to another level

Gabrielle It really is a unique community I think we are all lucky to be a part of it

Jeff Agreed

Tom Irsquove worked for eight different computer companies in different roles and titles and out of all of them the best group of people with the best product has always been NonStop For me there are four reasons why selling NonStop is so much fun

The first is that itrsquos a very complex product but itrsquos a fun product Itrsquos a value proposition sell not a commodity sell

Secondly itrsquos a relationship sell because of the nature of the solution Itrsquos the highest mission-critical application within our customer base If this system doesnrsquot work these customers could go out of business So that just screams high-level relationships

Third we have unbelievable support The solution architects within this group are next to none They have credibility that has been established over the years and they are clearly team players They believe in the team concept and theyrsquore quick to jump in and help other people

And the fourth reason is the Tandem culture What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile Thatrsquos why I feel like NonStop is unique Itrsquos the best place to sell and work It speaks volumes of why we are the way we are

Gabrielle Jeff what was it like to have Tom as your long-time mentor

Jeff Itrsquos been awesome Everybody should have a mentor but itrsquos a two-way street You canrsquot just say ldquoI need a mentorrdquo It doesnrsquot work like that It has to be a two-way relationship with a person on the other side of it willing to invest the time energy and care to really be effective in being a mentor Tom has been not only the most influential person in my career but also one of the most influential people in my life To have as much respect for someone in their profession as I have for Tom to get to admire and replicate what they do and to weave it into your own style is a cool opportunity but thatrsquos only one part of it

The other part is to see what kind person he is overall and with his family friends and the people that he meets Hersquos the real deal Irsquove just been really really lucky to get to spend all that time with him If you didnrsquot know any better you would think hersquos a salesmanrsquos salesman sometimes because he is so gregarious outgoing and such a people person but he is absolutely genuine in who he is and he always follows through with people I couldnrsquot have asked for a better person to be my mentor


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are always the first priority. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I am definitely not waiting until June to get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress. Not just another pretty face.

An integrated SQL database manager for HP NonStop.

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been something of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release had many updates, there were still many issues - not the least of which was that bash (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools - AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools - CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers for the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
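To make that generation step concrete, here is a minimal, hypothetical Python sketch of the same idea: read a config.h.in template, answer the symbols the generator knows how to test, and leave package-specific answers to an included config_vms.h. The real tool is the config_h.com DCL procedure on OpenVMS; the symbol table below is invented for illustration.

# Hypothetical illustration of the config.h.in -> config.h idea described above.
# The actual mechanism is the config_h.com DCL procedure; symbols here are examples.

KNOWN = {                      # answers the generator can determine on its own
    "HAVE_UNISTD_H": "1",
    "HAVE_STRING_H": "1",
}

def generate_config_h(template_path="config.h.in", output_path="config.h"):
    lines_out = []
    with open(template_path) as template:
        for line in template:
            if line.startswith("#undef "):
                symbol = line.split()[1]
                if symbol in KNOWN:
                    # Symbol we know how to answer: emit a #define.
                    lines_out.append(f"#define {symbol} {KNOWN[symbol]}\n")
                else:
                    # Leave unknown symbols alone; config_vms.h may supply them.
                    lines_out.append(f"/* #undef {symbol} */\n")
            else:
                lines_out.append(line)
    # Package-specific overrides come last, mirroring the description above.
    lines_out.append('#include "config_vms.h"\n')
    with open(output_path, "w") as output:
        output.writelines(lines_out)

if __name__ == "__main__":
    generate_config_h()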

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are also tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge in either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined. Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.[1] Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.[2]

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems; even though the US Navy obtains security patches for a nine-million-dollar annual fee,[3] such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.[4] This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems. In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.[5] Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS).[6]

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the accompanying diagram.[Note 1]

Integrating Data Protection Into Legacy Systems: Methods and Practices
by Jason Paul Kazarian

Note 1: This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.[7] Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing the originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.[8] It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.[9] In the data protection stack, each layer represents a discrete protection[Note 2] responsibility, while the boundaries between layers designate potential exploits. Traditionally we define four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.[10]

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:

Note 2: We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. The four layers, from most specific to most general, are Application, Database, Object, and Storage. Clear data is transported between adjacent layers via a secure tunnel; example traffic is formatted data items (Application to Database), files and directories (Database to Object), and disk blocks (Object to Storage).]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.[11]

As an exercise, let's consider "wrapping" a legacy system with a new web interface[12] that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.[Note 3] The output of this algorithm is another digit string with three properties (illustrated in the sketch following this list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
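To make the three properties concrete, here is a toy, hypothetical Python sketch of a keyed, format-preserving transform over digit strings and its inverse. It is not the NIST FFX/FF1 algorithm and is not secure; a real deployment would use a vetted FPE implementation with proper key management. It only shows that the output is all digits of the same length (equivalent), deterministic for a given key (referential), and invertible (reversible).

import hashlib
import hmac

def _round_digits(key: bytes, round_no: int, text: str, width: int) -> list:
    """Derive `width` pseudo-random digits from the key, the round number, and one half."""
    mac = hmac.new(key, f"{round_no}:{text}".encode(), hashlib.sha256).hexdigest()
    return [int(ch, 16) % 10 for ch in mac[:width]]

def toy_fpe_encrypt(key: bytes, digits: str, rounds: int = 4) -> str:
    """Toy Feistel-style transform: digits in, same-length digits out. NOT secure, NOT FF1/FF3."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for r in range(rounds):
        pad = _round_digits(key, r, right, len(left))
        mixed = "".join(str((int(d) + p) % 10) for d, p in zip(left, pad))
        left, right = right, mixed          # Feistel swap
    return left + right

def toy_fpe_decrypt(key: bytes, digits: str, rounds: int = 4) -> str:
    """Inverse of toy_fpe_encrypt (assumes an even number of rounds)."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for r in reversed(range(rounds)):
        pad = _round_digits(key, r, left, len(right))
        unmixed = "".join(str((int(d) - p) % 10) for d, p in zip(right, pad))
        left, right = unmixed, left         # undo the swap
    return left + right

key = b"demo-key"                            # placeholder key material
card = "4111111111111111"
protected = toy_fpe_encrypt(key, card)       # another 16-digit string, not the original
assert len(protected) == len(card) and protected.isdigit()   # equivalent
assert toy_fpe_encrypt(key, card) == protected               # referential (deterministic)
assert toy_fpe_decrypt(key, protected) == card               # reversible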

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,[13] and their customers, costing consumers billions of dollars annually.[14] As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data, as illustrated in the brief sketch below. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.
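As a toy illustration of that point (hypothetical data, standard library only), two data sets protected with the same deterministic transform can still be joined on the protected key without ever exposing the underlying card numbers. This sketch reuses the toy_fpe_encrypt function from the earlier example; any referential FPE behaves the same way.

# Hypothetical illustration: joining two anonymized data sets on a protected key.
key = b"demo-key"

payments = [  # protected card number -> purchase amount
    {"card": toy_fpe_encrypt(key, "4111111111111111"), "amount": 42.50},
    {"card": toy_fpe_encrypt(key, "5500005555555559"), "amount": 19.99},
]
loyalty = [   # protected card number -> loyalty tier
    {"card": toy_fpe_encrypt(key, "4111111111111111"), "tier": "gold"},
]

# The analytics firm sees only protected values, yet the join still works
# because the same input always maps to the same protected output.
tiers = {row["card"]: row["tier"] for row in loyalty}
for row in payments:
    print(row["card"], row["amount"], tiers.get(row["card"], "unknown"))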

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having the equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

Note 3: American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics[15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.[16] De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government[17] and chargebacks from insurance companies[18] if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story. RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you.


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked - would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open - every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Business Partner


PJL support. Windows, SAP host, UNIX/Linux.

Learn more at hollandhouse.com/unispool-printaurus


Dr. Bill Highleyman is the Managing Editor of The Availability Digest (www.availabilitydigest.com), a monthly online publication and a resource of information on high- and continuous-availability topics. His years of experience in the design and implementation of mission-critical systems have made him a popular seminar speaker and a sought-after technical writer. Dr. Highleyman is a past chairman of ITUG, the former HP NonStop User's Group, the holder of numerous US patents, the author of Performance Analysis of Transaction Processing Systems, and the co-author of the three-volume series Breaking the Availability Barrier.

The HPE Helion Private Cloud and Cloud Broker Services
Dr. Bill Highleyman

Managing Editor

Availability Digest

ADVOCACY

First, a reminder: don't forget the HP-UX Boot Camp, which will be held in Chicago from April 24th through April 26th. Check out the Connect website for details.

HPE Helion. HPE Helion is a complete portfolio of cloud products and services that offers enterprise security, scalability, and performance. Helion enables customers to deploy open and secure hybrid cloud solutions that integrate private cloud services, public cloud services, and existing IT assets, allowing IT departments to respond to fast-changing market conditions and to get applications to market faster. HPE Helion is based on the open-source OpenStack cloud technology.

The Helion portfolio includes the Helion CloudSystem, which is a private cloud; the Helion Development Program, which offers IT developers a platform to build, deploy, and manage cloud applications quickly and easily; and the Helion Managed Cloud Broker, which helps customers deploy hybrid clouds in which applications span private and public clouds.

In its initial release, HPE intended to create a public cloud with Helion. However, it has since decided not to compete with Amazon AWS and Microsoft Azure in the public-cloud space. It has withdrawn support for a public Helion cloud as of January 31, 2016.

[Figure: How a Hybrid Cloud Delivery Model Transforms IT (from "Become a cloud service broker," an HPE white paper)]

The Announcement of HP Helion. HP announced Helion in May 2014 as a portfolio of cloud products and services that would enable organizations to build, manage, and run applications in hybrid IT environments. Helion is based on the open-source OpenStack cloud. HP was quite familiar with OpenStack cloud services: it had been running OpenStack in enterprise environments for over three years. HP was a founding member of the OpenStack Foundation and a leader in the OpenStack and Cloud Foundry communities.

HP's announcement of Helion included several initiatives:

• It planned to provide OpenStack public cloud services in twenty of its existing eighty data centers worldwide.

• It offered a free version of the HP Helion OpenStack Community edition, supported by HP, for use by organizations for proofs of concept, pilots, and basic production workloads.

• The HP Helion Development Program, based on Cloud Foundry, offered IT developers an open platform to build, deploy, and manage OpenStack cloud applications quickly and easily.

• HP Helion OpenStack Professional Services assisted customers with cloud planning, implementation, and operation.

These new HP Helion cloud products and services joined the company's existing portfolio of hybrid cloud computing offerings, including the HP Helion CloudSystem, a private cloud solution.

What Is HPE Helion? HPE Helion is a collection of products and services that comprises HPE's Cloud Services:

• Helion is based on OpenStack, a large-scale open-source cloud project and community established to drive industry cloud standards. OpenStack is currently supported by over 150 companies. It allows service providers, enterprises, and government agencies to build massively scalable public, private, and hybrid clouds using freely available Apache-licensed software.

• The Helion Development Environment is based on Cloud Foundry, an open-source project that supports the full lifecycle of cloud development, from initial development through all testing stages to final deployment.

• The Helion CloudSystem (described in more detail later) is a cloud solution for a hybrid world. It is a fully integrated, end-to-end private cloud solution built for traditional and cloud-native workloads, and it delivers automation, orchestration, and control across multiple clouds.

• Helion Cloud Solutions provide tested, custom cloud solutions for customers. The solutions have been validated by HPE cloud experts and are based on OpenStack running on HP ProLiant servers.

OpenStack - The Open Cloud. OpenStack has three major components:

• OpenStack Compute - provisions and manages large networks of virtual machines.

• OpenStack Storage - creates massive, secure, and reliable storage using standard hardware.

• OpenStack Image - catalogs and manages libraries of server images stored on OpenStack Storage.

OpenStack Compute. OpenStack Compute provides all of the facilities necessary to support the life cycle of instances in the OpenStack cloud. It creates a redundant and scalable computing platform comprising large networks of virtual machines. It provides the software, control panels, and APIs necessary for orchestrating a cloud, including running instances, managing networks, and controlling access to the cloud.

OpenStack Storage. OpenStack Storage is modeled after Amazon's S3 (Simple Storage Service) object store. It provides redundant, scalable data storage using clusters of inexpensive commodity servers and hard drives to store massive amounts of data. It is not a file system or a database system. Rather, it is intended for long-term storage of large amounts of data (blobs). Its use of a distributed architecture with no central point of control provides great scalability, redundancy, and permanence.

[Diagram: the OpenStack cloud. OpenStack Compute provisions and manages large networks of virtual machines running on hypervisor hosts; OpenStack Storage creates petabytes of secure, reliable storage using commodity hardware; OpenStack Image catalogs and manages libraries of images (server images, web pages, backups, email), taking snapshot images of compute nodes and storing image snapshots on OpenStack Storage.]


OpenStack Image Service. The OpenStack Image Service is a retrieval system for virtual-machine images. It provides registration, discovery, and delivery services for these images. It can use OpenStack Storage or Amazon S3 (Simple Storage Service) for storage of virtual-machine images and their associated metadata. It provides a standard RESTful web interface for querying information about stored virtual images.
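To make these three services concrete, here is a brief, hypothetical sketch of driving them from Python with the openstacksdk library. The cloud name, image, flavor, network, and server names are placeholders that would come from your own OpenStack deployment, and the exact calls depend on the SDK version installed.

# Hypothetical sketch: boot a VM on an OpenStack cloud with the Python openstacksdk.
# "mycloud" and the image/flavor/network/server names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")        # credentials come from clouds.yaml

image = conn.compute.find_image("ubuntu-14.04")  # from the Image service catalog
flavor = conn.compute.find_flavor("m1.small")    # Compute sizing template
network = conn.network.find_network("private")   # tenant network

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)    # block until the instance is ACTIVE
print(server.name, server.status)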

The Demise of the Helion Public Cloud. After announcing its public cloud, HP realized that it could not compete with the giants of the industry, Amazon AWS and Microsoft Azure, in the public-cloud space. Therefore, HP (now HPE) sunsetted its Helion public cloud program in January 2016.

However, HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications based on HPE Helion OpenStack and the HPE Helion Development Platform. It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure.

HPE has been practical in terminating its public cloud program: the purchase of Eucalyptus provides ease of integration with Amazon AWS. Investment in the development of the open-source OpenStack model is protected, and OpenStack remains a robust and solid approach for the building, testing, and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore, HPE supports customers who want to run HPE's Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure.

The Helion Private Cloud - The HPE Helion CloudSystem. Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture"[1] is an excellent guide to the Helion CloudSystem.

1. HP Helion Rack solution architecture, HP White Paper, 2015.


Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. An early adopter of social media and active in communities, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog.

He started his "social persona" as @HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as @CalvinZito.

You can also contact him via email at calvin.zito@hp.com.

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage

The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement; hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure

To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says

Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says:

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure, and software-defined storage. This requires registration.

• One more: this focuses on use cases, including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.


What others are saying

Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits it is seeing. VIDEO HERE

The City of Los Angeles has also adopted HPE Hyper-Converged. I love the part where the customer talks about a 30 percent improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions

The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.



Chris Purcell has 28+ years of experience working with technology within the datacenter. Currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog.

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors - in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation: the decoupling of the key components of compute into fluid pools of resources so that IT can make better use of its infrastructure.

As discussed in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation

You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way: today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies; you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically, you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh
4/12/2016 Doha
4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
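To make the "unified API" idea concrete, here is an illustrative sketch of template-driven provisioning from Python. The endpoint paths, template name, and field names are hypothetical placeholders rather than a specific HPE product API; the point is simply that a single authenticated REST interface can compose a compute instance out of shared resource pools in one call.

import requests

BASE = "https://composer.example.net"  # hypothetical management appliance

# Authenticate once and reuse the session token for every later call.
login = requests.post(
    f"{BASE}/rest/login-sessions",
    json={"userName": "admin", "password": "secret"},
).json()
headers = {"Auth": login["sessionID"]}

# Compose a new instance from a predefined template that describes which
# compute, storage, and fabric resources to pull from the shared pools.
profile = {
    "name": "web-tier-node-07",
    "templateUri": "/rest/server-profile-templates/web-tier",
}
resp = requests.post(f"{BASE}/rest/server-profiles", json=profile, headers=headers)
print(resp.status_code, resp.json().get("status"))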

An infrastructure for both IT worlds

This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy, the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies operations for traditional workloads and, at the same time, accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven, cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.


Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases - and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations, those specifically built to leverage Big Data architectures, could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics: analytics-as-a-service and hardened architectures that are almost turnkey, with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all

When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years, we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators - data scientists, developers, IT operations managers, chief information security officers, and startup founders - who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites - everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory, or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study

Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
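As a rough illustration of the idea (not a description of any particular retail product), a few lines of Python can turn a log of Wi-Fi probe-request sightings into a per-device visit history keyed by MAC address; the field names and sample data are invented for the example.

from collections import defaultdict
from datetime import datetime

# Invented sample of probe-request sightings: (MAC address, access point zone, timestamp).
sightings = [
    ("aa:bb:cc:11:22:33", "entrance", "2016-03-01T10:02:11"),
    ("aa:bb:cc:11:22:33", "denim", "2016-03-01T10:09:40"),
    ("aa:bb:cc:11:22:33", "entrance", "2016-03-08T17:45:02"),
    ("dd:ee:ff:44:55:66", "shoes", "2016-03-08T17:50:30"),
]

# Group sightings by device so a returning MAC address pulls up its prior path.
visits = defaultdict(list)
for mac, zone, ts in sightings:
    visits[mac].append((datetime.fromisoformat(ts), zone))

for mac, path in visits.items():
    path.sort()
    zones = " -> ".join(zone for _, zone in path)
    print(f"{mac}: {len(path)} sightings, path: {zones}")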

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk - they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises

Organizations need to decide whether to perform data analytics on-premises - either virtualized or installed directly on the hardware (i.e., "bare metal") - or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup mainly meant burning CDs, which we would often forget to do, or just not do because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned: either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months, we bit the bullet and had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning

BY THE NUMBERS

Business interruptions come in all shapes and sizes: natural disasters, cybersecurity incidents, system failures, human error, operational activities, theft, power outages... the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's six zeros, folks. Compound that with the fact that 50 percent of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed that 80 percent of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and a lack of planning in these areas poses a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK

Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open to attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large, exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches of backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it harder to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just nice to have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY

Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures; disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Security Standards Council relaxing its requirements on cardholder data protection for the duration a card-processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION

Neglecting to build proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview

When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding required to maintain reliable access to keys. If you lose access to keys, you by extension lose access to the data, which can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle.

Key management deployment architectures

Whether through manual procedures or automation, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), a key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Given this scope, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management

Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, and other administrators or auditors across the organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that would undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed here, along with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications - the analogy being locking your front door but placing keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within its own silo. Because local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, which are prone to error, neglect, and misuse
• Places "all eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management

Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled, such as through a console UI over secure IP networks, without management being co-located with the application. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher-assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have that key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
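As a rough sketch of what that remote revocation could look like in practice - assuming a KMIP-compliant key manager reachable over the network and the open-source PyKMIP client library, with the hostname, certificate paths, and key identifier invented for the example - an administrator could revoke and destroy the stolen device's key without ever touching the device itself:

from kmip.pie.client import ProxyKmipClient
from kmip import enums

# Placeholder connection details for a KMIP-compliant key manager.
client = ProxyKmipClient(
    hostname="keymanager.example.net",
    port=5696,
    cert="/etc/pki/kmip/client-cert.pem",
    key="/etc/pki/kmip/client-key.pem",
    ca="/etc/pki/kmip/ca.pem",
)

stolen_device_key_id = "42"  # hypothetical UID of the stolen device's data key

with client:
    # Mark the key as compromised, then destroy it so the device can never
    # unlock its encrypted media again; the key manager logs both operations
    # for audit and compliance evidence.
    client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, stolen_device_key_id)
    client.destroy(stolen_device_key_id)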

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have simplicity similar to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple: similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit, without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in the same place operationally

Cons:
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management

The idea of a centralized, unified system - or, commonly, enterprise secure key management - is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies.

A centralized approach reduces the risk of keys being compromised locally along with the encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher-assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Recognizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize these risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys


Best practices: adopting a flexible, strategic approach

In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure, where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system

Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to the cost, administrative flexibility, and security assurance levels provided.

• Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard for measuring security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

• Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a minimal client sketch follows this list).

• Policy model: Key lifecycle controls should follow NIST SP800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the lifecycle state of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

• Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility in mapping user roles to specific responsibilities.

• High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

• Scalability: As applications scale and new applications are enrolled in a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

• Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

• Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
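To make the KMIP consideration concrete, here is a minimal client sketch. ESKM supports the KMIP standard, but the connection details, key name, and the use of the open-source PyKMIP library below are illustrative assumptions rather than product documentation:

from kmip.pie.client import ProxyKmipClient
from kmip import enums

# Placeholder connection details for any KMIP-compliant key manager.
client = ProxyKmipClient(
    hostname="keymanager.example.net",
    port=5696,
    cert="/etc/pki/kmip/client-cert.pem",
    key="/etc/pki/kmip/client-key.pem",
    ca="/etc/pki/kmip/ca.pem",
)

with client:
    # Ask the key manager to generate and vault a new 256-bit AES key;
    # only the key's unique identifier is returned to the application.
    key_id = client.create(
        enums.CryptographicAlgorithm.AES,
        256,
        name="backup-volume-17",
    )

    # Later (for example, at volume mount time) the application retrieves
    # the key by its identifier, subject to the server's access policy.
    key = client.get(key_id)
    print(key_id, key.cryptographic_algorithm, key.cryptographic_length)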

Conclusions

As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high-availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies: HPE Enterprise Secure Key Manager

Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high-availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise

ESKM scales easily to support large enterprise deployments of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. You can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security

HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing, infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE

Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies - laser, inkjet, and PageWide - to address their printing needs.

How HP PageWide Technology is different

To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach: a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office

HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing - laser, inkjet, and now PageWide - can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40 percent faster, use up to 53 percent less energy, and have a 40 percent smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50 percent less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small workgroups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84 percent less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization

Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information, and help make the most of your IT investment.

Turning to HP

To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution

Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7 percent CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-power, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory - from limited-capability M2M services to the super-capable IoT ecosystem - has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity - economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth) - which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015. 2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle when the devices become distributed by the millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software - widely used in the communications industry - by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices - standards-based and proprietary communication - and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators - CSPs and enterprises alike - can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform over traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric - as data and its monetization are the essence of the IoT business model - and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform; it manages the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plans, subscriptions, device associations, and more) and IoT gateways and devices (including provisioning, configuration, and monitoring), and troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs - volume, variety, and velocity - typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
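To make the device-facing side concrete, here is a minimal, hypothetical sketch of a gateway publishing a meter reading over MQTT, one of the protocols listed above. The broker host, topic, and payload fields are illustrative assumptions, not part of the HPE platform.

```python
import json
import time

import paho.mqtt.client as mqtt  # third-party package: paho-mqtt

# Connect to a hypothetical broker; host, port, and client ID are placeholders.
client = mqtt.Client(client_id="meter-0042")
client.connect("broker.example.com", 1883, keepalive=60)

# A simple telemetry payload a smart meter might report.
reading = {
    "deviceId": "meter-0042",
    "timestamp": int(time.time()),
    "kwh": 12.7,
}

# Publish with QoS 1 so the broker acknowledges receipt, then disconnect.
client.publish("meters/0042/telemetry", json.dumps(reading), qos=1)
client.disconnect()
```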

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
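As described above, applications consume DAV resources over a oneM2M-compliant HTTP REST interface. The following hedged sketch shows how an application might read the latest stored reading for a device; the host name, resource path, and originator are placeholder assumptions rather than documented HPE endpoints.

```python
import requests

BASE = "https://iot-platform.example.com/onem2m"  # placeholder host and path

headers = {
    "X-M2M-Origin": "C-demo-app",  # originator (application) identifier
    "X-M2M-RI": "req-0001",        # request identifier
    "Accept": "application/json",
}

# Read the latest content instance ("la") under a hypothetical device container.
resp = requests.get(f"{BASE}/cse-in/meter-0042/telemetry/la", headers=headers)
resp.raise_for_status()
print(resp.json())
```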

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value through monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstance when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
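For illustration only, here is a small sketch of that first step: splitting first-time and repeat buyers from a simple orders extract. The file name and column names (customer_id, order_id, amount) are assumptions, not a reference to any HPE tool.

```python
import pandas as pd

# Hypothetical orders extract: one row per order.
orders = pd.read_csv("orders.csv")

per_customer = orders.groupby("customer_id").agg(
    order_count=("order_id", "count"),
    total_spend=("amount", "sum"),
)

# First-time buyers have exactly one order; everyone else is a repeat buyer.
per_customer["segment"] = per_customer["order_count"].map(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare average total spend across the two segments.
print(per_customer.groupby("segment")["total_spend"].mean())
```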

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear - the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location - all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend - both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime - which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Has Important Insights For NonStop Users


Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had experienced malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes - which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
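As a rough illustration of that "deviation from normal" idea, the sketch below flags users whose daily privileged-command count is far above their own historical baseline. The data source, counts, and threshold are assumptions for illustration only; this is not part of any XYPRO or HPE product.

```python
import statistics

def flag_anomalies(history, today, z_threshold=3.0):
    """history: {user: [daily counts]}, today: {user: today's count}.
    Returns (user, z-score) pairs whose activity is far above their baseline."""
    alerts = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough history to establish a baseline
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
        z = (today.get(user, 0) - mean) / stdev
        if z > z_threshold:
            alerts.append((user, round(z, 1)))
    return alerts

# Hypothetical daily privileged-command counts per user.
history = {"alice": [3, 4, 2, 5, 3], "bob": [10, 12, 11, 9, 13]}
print(flag_anomalies(history, {"alice": 25, "bob": 12}))  # flags only alice
```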

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study - but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems - including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle Tell us about how things have been going while Tom prepares to retire

Jeff Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle So the transition has already taken place?

Jeff Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle So it doesn't differ too much, then, from your previous role?

Jeff No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role and I'm definitely ready for it.

Gabrielle Could you give us a little bit of information about your background leading into your time at HPE

Jeff My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle Could you share one

Tom Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender - nothing serious - but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle That's such a great story.

Jeff It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community - everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing - I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle It really is a unique community. I think we are all lucky to be a part of it.

Jeff Agreed

Tom I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle Jeff, what was it like to have Tom as your long-time mentor?

Jeff It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle Tom, what has it been like from your perspective to be Jeff's mentor?

Tom Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle Tom, was there a mentor who has motivated you to be able to influence people like Jeff?

Tom Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get across those messages I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle It's just kind of a bittersweet moment.

Jeff Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle Yes Tom is kind of a legend in the NonStop space

Jeff He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" - even if it was a few degrees of separation - the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle Well it sounds like an interesting opportunity and at an interesting time

Jeff With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle We wish you the best of luck in your new position Jeff

Jeff Thank you


SQLXPress - Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release had many updates, there were still many issues - not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 there was a Community effort started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools - AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools - CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The porting effort by John Malmberg on cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John uses to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that includes "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source Software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national Oceanography Cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor's of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.
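As a rough, hedged illustration of format-preserving encryption in the FFX family, the sketch below uses the third-party pyffx Python package (an assumption; the article does not name a library). A 16-digit, card-style number encrypts to another value of the same width, which is what lets legacy field layouts and storage semantics survive unchanged.

```python
import pyffx  # third-party package implementing FFX-style format-preserving encryption

# Encrypt a 16-digit, card-style number; the ciphertext fits the same field width.
fpe = pyffx.Integer(b"secret-key-material", length=16)

plaintext = 4111111111111111
ciphertext = fpe.encrypt(plaintext)

assert len(str(ciphertext)) <= 16            # field width is preserved
assert fpe.decrypt(ciphertext) == plaintext  # round-trips correctly
print(ciphertext)
```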

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems: even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained essentially constant, bounded by O(1), the number of records exposed has grown exponentially, at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack, from most specific to most general: Application, Database, Object, and Storage layers, with example traffic such as formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also attracts more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
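To make the three properties concrete, here is a minimal, illustrative Python sketch. It is not the NIST FFX/FF1 algorithm discussed in this article (a production deployment would use a validated FF1/FF3 implementation with proper key management); it simply demonstrates a keyed, deterministic, reversible transform over digit strings that preserves length and character set. The key, round count, and function names are hypothetical.

```python
import hashlib
import hmac

KEY = b"demo-key-not-for-production"   # hypothetical key; real systems use managed keys
ROUNDS = 10                            # must be even so the halves line up on output


def _prf(key: bytes, round_no: int, data: str, width: int) -> int:
    """Keyed pseudo-random function for one Feistel round (HMAC-SHA256)."""
    digest = hmac.new(key, f"{round_no}:{data}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)


def protect(digits: str, key: bytes = KEY) -> str:
    """Map a digit string to another digit string of the same length (deterministic)."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for rnd in range(ROUNDS):
        width = len(left)
        mixed = (int(left) + _prf(key, rnd, right, width)) % (10 ** width)
        left, right = right, f"{mixed:0{width}d}"
    return left + right


def reveal(digits: str, key: bytes = KEY) -> str:
    """Invert protect() and recover the original digit string."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for rnd in reversed(range(ROUNDS)):
        width = len(right)
        original = (int(right) - _prf(key, rnd, left, width)) % (10 ** width)
        left, right = f"{original:0{width}d}", left
    return left + right


if __name__ == "__main__":
    pan = "4111111111111111"                             # sample 16-digit card number
    token = protect(pan)
    assert len(token) == len(pan) and token.isdigit()    # equivalent
    assert protect(pan) == token                         # referential
    assert reveal(token) == pan                          # reversible
    print(pan, "->", token)
```

Only the component holding the key (the payment interface in the example above) can call the inverse function, so data captured at ingress stays protected everywhere else in the stack.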

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.
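Reusing the illustrative protect() helper from the sketch above, the following toy example shows how the referential property preserves joins across data sets after tokenization; the data values are made up.

```python
# Orders and payments keyed by card number (sample values only).
orders   = [("4111111111111111", "order-1001"), ("5500005555555559", "order-1002")]
payments = [("4111111111111111", 120.00), ("5500005555555559", 89.50)]

# Tokenize the key column before handing both data sets to a third party.
orders_t   = [(protect(pan), order_id) for pan, order_id in orders]
payments_t = [(protect(pan), amount) for pan, amount in payments]

# The analytics firm can still join the two sets on the protected key:
# the relations survive, but no real card number is ever exposed.
joined = [(order_id, amount)
          for tok_o, order_id in orders_t
          for tok_p, amount in payments_t
          if tok_o == tok_p]
print(joined)   # [('order-1001', 120.0), ('order-1002', 89.5)]
```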

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability)
Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape
Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality
A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hpe.com

A Modernization Success Story
RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

With the program now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


3

Dr. Bill Highleyman is the Managing Editor of The Availability Digest (www.availabilitydigest.com), a monthly online publication and a resource of information on high and continuous availability topics. His years of experience in the design and implementation of mission-critical systems have made him a popular seminar speaker and a sought-after technical writer. Dr. Highleyman is a past chairman of ITUG, the former HP NonStop Users' Group, the holder of numerous US patents, the author of Performance Analysis of Transaction Processing Systems, and the co-author of the three-volume series Breaking the Availability Barrier.

The HPE Helion Private Cloud and Cloud Broker Services
Dr. Bill Highleyman

Managing Editor

Availability Digest

ADVOCACY

First – A Reminder: Don't forget the HP-UX Boot Camp, which will be held in Chicago from April 24th through April 26th. Check out the Connect website for details.

HPE Helion
HPE Helion is a complete portfolio of cloud products and services that offers enterprise security, scalability, and performance. Helion enables customers to deploy open and secure hybrid cloud solutions that integrate private cloud services, public cloud services, and existing IT assets to allow IT departments to respond to fast-changing market conditions and to get applications to market faster. HPE Helion is based on the open-source OpenStack cloud technology.

The Helion portfolio includes the Helion CloudSystem, which is a private cloud; the Helion Development Program, which offers IT developers a platform to build, deploy, and manage cloud applications quickly and easily; and the Helion Managed Cloud Broker, which helps customers deploy hybrid clouds in which applications span private and public clouds.

In its initial release HPE intended to create a public cloud

How a Hybrid Cloud Delivery Model Transforms IT (from "Become a cloud service broker," HPE white paper)

4

with Helion. However, it has since decided not to compete with Amazon AWS and Microsoft Azure in the public-cloud space. It has withdrawn support for a public Helion cloud as of January 31, 2016.

The Announcement of HP Helion
HP announced Helion in May 2014 as a portfolio of cloud products and services that would enable organizations to build, manage, and run applications in hybrid IT environments. Helion is based on the open-source OpenStack cloud. HP was quite familiar with OpenStack cloud services: it had been running OpenStack in enterprise environments for over three years. HP was a founding member of the OpenStack Foundation and a leader in the OpenStack and Cloud Foundry communities.

HP's announcement of Helion included several initiatives:

• It planned to provide OpenStack public cloud services in twenty of its existing eighty data centers worldwide.

• It offered a free version of the HP Helion OpenStack Community edition, supported by HP, for use by organizations for proofs of concept, pilots, and basic production workloads.

• The HP Helion Development Program, based on Cloud Foundry, offered IT developers an open platform to build, deploy, and manage OpenStack cloud applications quickly and easily.

• HP Helion OpenStack Professional Services assisted customers with cloud planning, implementation, and operation.

These new HP Helion cloud products and services joined the company's existing portfolio of hybrid cloud computing offerings, including the HP Helion CloudSystem, a private cloud solution.

What Is HPE Helion?
HPE Helion is a collection of products and services that comprises HPE's Cloud Services:

• Helion is based on OpenStack, a large-scale open-source cloud project and community established to drive industry cloud standards. OpenStack is currently supported by over 150 companies. It allows service providers, enterprises, and government agencies to build massively scalable public, private, and hybrid clouds using freely available Apache-licensed software.

• The Helion Development Environment is based on Cloud Foundry, an open-source project that supports the full lifecycle of cloud development, from initial development through all testing stages to final deployment.

• The Helion CloudSystem (described in more detail later) is a cloud solution for a hybrid world. It is a fully integrated, end-to-end private cloud solution built for traditional and cloud-native workloads, and it delivers automation, orchestration, and control across multiple clouds.

• Helion Cloud Solutions provide tested, custom cloud solutions for customers. The solutions have been validated by HPE cloud experts and are based on OpenStack running on HP ProLiant servers.

OpenStack – The Open Cloud
OpenStack has three major components:

• OpenStack Compute - provisions and manages large networks of virtual machines

• OpenStack Storage - creates massive, secure, and reliable storage using standard hardware

• OpenStack Image - catalogs and manages libraries of server images stored on OpenStack Storage

OpenStack Compute
OpenStack Compute provides all of the facilities necessary to support the life cycle of instances in the OpenStack cloud. It creates a redundant and scalable computing platform comprising large networks of virtual machines. It provides the software, control panels, and APIs necessary for orchestrating a cloud, including running instances, managing networks, and controlling access to the cloud.

OpenStack Storage
OpenStack Storage is modeled after Amazon's EBS (Elastic Block Store) mass store. It provides redundant, scalable data storage, using clusters of inexpensive commodity servers and hard drives to store massive amounts of data. It is not a file system or a database system; rather, it is intended for long-term storage of large amounts of data (blobs). Its use of a distributed architecture with no central point of control provides great scalability, redundancy, and permanence.

continued on page 5

[Diagram: the OpenStack cloud. OpenStack Compute provisions and manages large networks of virtual machines running on hypervisor hosts; OpenStack Storage creates petabytes of secure, reliable storage using commodity hardware; OpenStack Image catalogs and manages libraries of images (server images, web pages, backups, email) and stores image snapshots of compute nodes on OpenStack Storage.]

5

OpenStack Image Service
The OpenStack Image Service is a retrieval system for virtual-machine images. It provides registration, discovery, and delivery services for these images. It can use OpenStack Storage or Amazon S3 (Simple Storage Service) for storage of virtual-machine images and their associated metadata. It provides a standard RESTful web interface for querying information about stored virtual images.
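As a rough illustration of how these three services appear to a developer, the sketch below uses the community openstacksdk Python client. The cloud entry, image name, and flavor are assumptions, and a Helion deployment may also expose these services through its own portal and tooling.

```python
import openstack

# Credentials come from a clouds.yaml entry; "helion-demo" is a hypothetical name.
conn = openstack.connect(cloud="helion-demo")

# OpenStack Image: browse the catalog of server images.
for image in conn.image.images():
    print("image:", image.name)

# OpenStack Compute: boot a virtual machine from a cataloged image.
image = conn.compute.find_image("hlinux-guest")       # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")
server = conn.compute.create_server(
    name="demo-vm", image_id=image.id, flavor_id=flavor.id)
server = conn.compute.wait_for_server(server)
print("server status:", server.status)

# OpenStack Storage (object store): keep a snapshot blob in a container.
conn.object_store.create_container(name="backups")
conn.object_store.upload_object(
    container="backups", name="demo-vm.snapshot", data=b"...")
```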

The Demise of the Helion Public Cloud
After announcing its public cloud, HP realized that it could not compete with the giants of the industry, Amazon AWS and Microsoft Azure, in the public-cloud space. Therefore, HP (now HPE) sunsetted its Helion public cloud program in January 2016.

However, HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications based on HPE Helion OpenStack and the HPE Helion Development Platform. It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure.

HPE has been practical in terminating its public cloud program: its purchase of Eucalyptus provides ease of integration with Amazon AWS. Investment in the development of the open-source OpenStack model is protected and remains a robust and solid approach for the building, testing, and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore, HPE supports customers who want to run HPE's Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure.

The Helion Private Cloud – The HPE Helion CloudSystem
Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture"1 is an excellent guide to the Helion CloudSystem.

1 HP Helion Rack solution architecture, HP White Paper, 2015


continued from page 4

6

7

Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media, active in communities, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog.

He started his "social persona" as HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as CalvinZito.

You can also contact him via email at calvin.zito@hpe.com.

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage
The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement. Hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure
To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says
Hyper-convergence is growing in popularity, even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges and how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says:

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure, and software-defined storage. This requires registration.

• One more: this focuses on use cases including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.

8

What others are saying
Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing. VIDEO HERE

The City of Los Angeles also has adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions
The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016 | Fairmont San Jose Hotel | San Jose, CA

9

Chris Purcell has 28+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog.

Composable Infrastructure: Breakthrough to Fast, Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources so that IT can make better use of their infrastructure.

As I noted in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things that's needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation
You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way – today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh
4/12/2016 Doha
4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
5/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
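As a sketch of what that programmability could look like in practice, the snippet below posts a declarative profile to a hypothetical composer endpoint. The URL, payload fields, and template name are invented for illustration; this is not the actual HPE Synergy or OneView API.

```python
import requests

# Hypothetical composer endpoint and payload -- illustrative of the pattern only.
API = "https://composer.example.com/rest"
session = requests.Session()
session.headers["Auth"] = "<session token obtained from a prior login call>"

# Describe the desired footprint as data: compute, storage, and fabric drawn from pools.
profile = {
    "name": "web-tier-node-01",
    "template": "web-tier-template",                       # hypothetical profile template
    "storage": {"volumes": [{"sizeGiB": 200, "pool": "flash-pool-1"}]},
    "fabric": {"networks": ["prod-net", "cluster-net"]},
}

# One API call asks the composer to pull resources from the pools and assemble them.
resp = session.post(f"{API}/server-profiles", json=profile, timeout=60)
resp.raise_for_status()
print("provisioning task:", resp.json().get("taskUri", "<pending>"))
```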

An infrastructure for both IT worlds
This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter. HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.

Hewlett Packard Enterprise Technology User Group

10

11

Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations, specifically designed to leverage Big Data architectures, could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics: analytics-as-a-service and hardened architectures that are almost turnkey, with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all
When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators (data scientists, developers, IT operations managers, chief information security officers, and startup founders) who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.

12

"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites: everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study
Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up their historical data that was captured on the previous visit.

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk; they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.
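As a toy illustration of the idea, assuming a simple log of access-point pings keyed by MAC address, the following sketch rolls the pings up into estimated dwell time per store zone. The log format, zone names, and session threshold are assumptions, not any particular retailer's system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical access-point ping log: (device MAC, store zone, timestamp).
pings = [
    ("aa:bb:cc:dd:ee:01", "denim",     datetime(2016, 3, 5, 10, 2)),
    ("aa:bb:cc:dd:ee:01", "denim",     datetime(2016, 3, 5, 10, 9)),
    ("aa:bb:cc:dd:ee:01", "registers", datetime(2016, 3, 5, 10, 20)),
    ("aa:bb:cc:dd:ee:02", "footwear",  datetime(2016, 3, 5, 11, 1)),
]

def dwell_by_zone(pings, session_gap=timedelta(minutes=15)):
    """Estimate how long each device lingered in each zone of the store."""
    per_device = defaultdict(list)
    for mac, zone, ts in sorted(pings, key=lambda p: p[2]):
        per_device[mac].append((zone, ts))

    dwell = defaultdict(timedelta)
    for mac, visits in per_device.items():
        for (zone, ts), (_, next_ts) in zip(visits, visits[1:]):
            if next_ts - ts <= session_gap:        # same shopping session
                dwell[(mac, zone)] += next_ts - ts
    return dwell

for (mac, zone), time_spent in dwell_by_zone(pings).items():
    print(mac, zone, time_spent)
```

In practice the MAC addresses themselves would be anonymized (for instance with a referential tokenization scheme) before the data leaves the retailer's control.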

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises, either virtualized or installed directly on the hard disk (i.e., "bare metal"), or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using the cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.

13

STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology

14

Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which oftentimes we would forget to do, or just not do, because "we were just too busy." After the theft we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the Mission-Critical computing marketplace.

15

How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (cont.)

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages... the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
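As a quick back-of-the-envelope check on those figures, assuming a day-long outage at the small-business rate cited above:

```python
# Downtime exposure using the figures cited above; the 24-hour outage length
# is an assumption for illustration.
cost_per_hour = 8_000          # small-business cost of one hour of downtime (USD)
outage_hours = 24              # half of reported outages last at least this long
print(f"Exposure for a 24-hour outage: ${cost_per_hour * outage_hours:,}")
# -> Exposure for a 24-hour outage: $192,000
```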

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is – THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.

16

Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is looked at and treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically only a subset of those security solutions are deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would rather experience downtime than deal with a breach. The last thing you want during disaster recovery is to be hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large, exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured like the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box is ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that data restored to a DR system can be much easier to access, and activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Security Standards Council relaxing its requirements on cardholder data protection for the duration that a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security in disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems, effectively prepare you for recovery operations while keeping security at the forefront, and keep your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the human processes and safeguards for maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data, which can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), the key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
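To make the risk concrete, here is a minimal, hypothetical sketch of what ad hoc local key handling often looks like, using the open-source Python cryptography library; the file path and the idea of archiving the key next to the application are illustrative assumptions, not a recommended design.

# Hypothetical ad hoc local key handling (illustration only, not a recommended design)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)          # generate a 256-bit AES key locally

# "Vaulting" here is just a file beside the application: exactly the co-location
# and single-copy risk discussed in this article if the host is lost or compromised.
with open("local-key-archive.bin", "wb") as archive:   # assumed archive location
    archive.write(key)

aesgcm = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"sensitive record", None)   # encrypt application data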

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids the wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management
HPE Enterprise Secure Key Manager solutions
Nathan Turajski

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, along with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications: the analogy is locking your front door but placing keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find that the cost and management simplicity originally assumed now become more complex, making audit and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons
• Co-locating keys with the encrypted data provides easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect, and misuse
• Places "all eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled, for example through a console UI over secure IP networks, without management being co-located with the application. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
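As a hedged sketch of what such a remote operation can look like from an administrator's point of view, the following uses the open-source PyKMIP client against a generic KMIP-capable key manager (the KMIP standard is discussed later in this article); the hostname, certificate paths, and key identifier are placeholders, and this is not a depiction of any specific HPE product API.

# Hedged sketch: remotely revoke access to a stolen device's data by destroying its key.
# Assumes a KMIP-capable key manager and the open-source PyKMIP client library;
# hostname, credentials, and the key identifier are placeholders.
from kmip.pie.client import ProxyKmipClient

client = ProxyKmipClient(
    hostname="keymanager.example.com",   # remote key management server (placeholder)
    port=5696,
    cert="/etc/pki/admin.crt",
    key="/etc/pki/admin.key",
    ca="/etc/pki/ca.crt",
)

stolen_device_key_id = "1234"            # hypothetical identifier recorded at enrollment

with client:
    client.destroy(stolen_device_key_id)  # the device can no longer obtain its key at boot
# The key manager's audit log records when and by whom the key was destroyed,
# supporting the compliance evidence described above.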

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, where co-location would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in the same place operationally

Cons
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, an enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed in clusters, redundantly for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys. A centralized solution, on the other hand, runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated, with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users could compromise the system; best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations (a brief client sketch follows this list). Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled in a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
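As a hedged illustration of what KMIP interoperability looks like from an application's point of view, the sketch below uses the open-source PyKMIP client as a stand-in for any KMIP-capable key manager; the endpoint, certificate paths, and key name are placeholders and do not describe a particular product's configuration.

# Hedged sketch: a KMIP-capable application creating and later retrieving a key
# from a central key manager. Endpoint, credentials, and names are placeholders.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

client = ProxyKmipClient(hostname="eskm.example.com", port=5696,
                         cert="/etc/pki/app.crt", key="/etc/pki/app.key",
                         ca="/etc/pki/ca.crt")

with client:
    # Ask the key manager to generate a 256-bit AES key; only an identifier comes
    # back, so key material stays under central control, policy, and audit.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256,
                           name="backup-tape-key-2016")

    # Later, any authorized application can retrieve the same key by identifier.
    key = client.get(key_id)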

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale, with a single-pane-of-glass view into controls and auditing, can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence, while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.



Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and has scaled it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet, and PageWide, to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar spanning the entire width of a page, printing pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, laser, inkjet, and now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.



Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:
• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Marketplace, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type, from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, and to integrate devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE (4G), and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership), with high scalability and flexibility, when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules:

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Graphic: HPE Universal IoT Platform highlights: manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C models.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices, and for communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
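As a purely illustrative sketch (not HPE's actual connector code), a minimal connector for MQTT-speaking devices built on the open-source paho-mqtt client might subscribe to device topics and hand each message to the platform's ingestion queue; the broker address, topic layout, and the ingest() hand-off are assumptions for illustration.

# Illustrative MQTT protocol-connector sketch using the open-source paho-mqtt client.
# Broker address, topic layout, and the ingest() hand-off are assumptions.
import json
import paho.mqtt.client as mqtt

def ingest(device_id, payload):
    # Placeholder for handing the normalized reading to the platform's message queue
    print(f"queue <- {device_id}: {payload}")

def on_message(client, userdata, msg):
    device_id = msg.topic.split("/")[-1]           # e.g. topic "sensors/meter/12345"
    ingest(device_id, json.loads(msg.payload))     # normalize and forward

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.example.com", 1883)           # hypothetical broker
client.subscribe("sensors/#")                      # all device topics
client.loop_forever()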

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data (a brief illustrative sketch follows the list below):

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
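The following is a minimal, hypothetical sketch of the kind of rule-driven validation and gap-filling described above; the thresholds, field names, and extrapolation rule are invented for illustration and do not reflect the platform's internal APIs.

# Hypothetical rule-driven validation/extrapolation step for incoming meter readings.
# Thresholds, field names, and the extrapolation rule are invented for illustration.
RULES = {"meter_kwh": {"min": 0.0, "max": 10_000.0, "on_missing": "extrapolate"}}

def validate(reading, previous):
    rule = RULES["meter_kwh"]
    value = reading.get("meter_kwh")

    if value is None and rule["on_missing"] == "extrapolate":
        # Fill the gap from the last two good readings (simple linear extrapolation)
        value = 2 * previous[-1] - previous[-2]
        reading["meter_kwh"] = value
        reading["quality"] = "extrapolated"
    elif value is None or not (rule["min"] <= value <= rule["max"]):
        reading["quality"] = "rejected"    # out of range or unrecoverable: flag for re-acquisition
    else:
        reading["quality"] = "ok"
    return reading

print(validate({"device": "meter-12345"}, previous=[10.0, 10.5]))
# -> {'device': 'meter-12345', 'meter_kwh': 11.0, 'quality': 'extrapolated'}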

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access control policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
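As a hedged illustration of the kind of batch analytic this enables, the sketch below runs a daily consumption summary against a hypothetical Vertica table of meter readings using the open-source vertica_python driver; the table, columns, and connection details are invented for this example.

# Illustrative batch analytic against a hypothetical Vertica table of sensor readings,
# using the open-source vertica_python driver. Table, columns, and credentials are invented.
import vertica_python

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "analyst", "password": "change-me", "database": "iot"}

QUERY = """
    SELECT device_id,
           DATE_TRUNC('day', reported_at) AS day,
           AVG(meter_kwh) AS avg_kwh,
           MAX(meter_kwh) AS peak_kwh
    FROM readings
    GROUP BY device_id, DATE_TRUNC('day', reported_at)
    ORDER BY day
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    for device_id, day, avg_kwh, peak_kwh in cur.fetchall():
        print(device_id, day, avg_kwh, peak_kwh)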

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED Keep it simple with customer data To avoid information overload start small with data that is collected from your customers Target buyer behavior by segmenting and separating first-time and repeat customers Look at differences in purchasing behavior which marketing efforts have yielded the best results and what constitutes high-value and low-value buying behaviors

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."
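
As a concrete illustration of that first step, the sketch below separates first-time from repeat customers and compares their buying behavior. It is a minimal, hypothetical example (made-up order data and column names), not a prescribed HPE toolchain; in practice the data would come from your sales or CRM system.

```python
import pandas as pd

# Hypothetical order history; in practice this comes from your sales system.
orders = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", "C3", "C3", "C3"],
    "order_total": [120.0, 80.0, 45.0, 200.0, 150.0, 95.0],
})

# Separate first-time from repeat customers.
per_customer = orders.groupby("customer_id")["order_total"].agg(["count", "sum"])
per_customer["segment"] = per_customer["count"].map(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare average order count and spend across the two segments.
summary = per_customer.groupby("segment")[["count", "sum"]].mean()
print(summary)
```

Even a small, repeatable analysis like this answers the kinds of questions listed above: which segment spends more, and which marketing efforts move customers from one segment to the other.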

PUT THE FOUNDATION IN PLACE Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning Big Data into Business Insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.

33

As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS? CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.

34

35

The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances: Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users

36

Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
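
To make that second step concrete, the toy sketch below flags activity that falls outside a user's established baseline. It is only an illustration of the idea (made-up users, actions, and log format), not an XYGATE, Merged Audit, or SIEM API; real monitoring would draw on your actual audit events and richer statistical baselines.

```python
from collections import defaultdict

# Hypothetical historical audit records: (user, action) pairs from a security log.
history = [
    ("ops1", "FUP INFO"), ("ops1", "SQLCI SELECT"), ("ops1", "FUP INFO"),
    ("dba1", "SQLCI SELECT"), ("dba1", "SQLCI UPDATE STATISTICS"),
]

# Build a simple per-user baseline of previously observed actions.
baseline = defaultdict(set)
for user, action in history:
    baseline[user].add(action)

def deviates(user: str, action: str) -> bool:
    """Return True when an action falls outside the user's established baseline."""
    return action not in baseline[user]

# Screen new events; anything unusual is raised for review before it becomes a breach.
for user, action in [("ops1", "FUP INFO"), ("ops1", "FUP PURGE AUDITLOG")]:
    if deviates(user, action):
        print(f"ALERT: {user} performed an unusual action: {action}")
```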

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my part; back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you get into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned, veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you had asked me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing: I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies, in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the most mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later, he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are always the first priority. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought, too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress: Not just another pretty face

An integrated SQL database manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been an effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years, the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release included so many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012, a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV in order to use them.

If you do install/upgrade GNV, then GNV must be installed first; note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers around the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently, it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the ClamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently, Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best and completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee [3], as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram¹.

Integrating Data Protection Into Legacy Systems: Methods and Practices, by Jason Paul Kazarian

¹ This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% were caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications [7]. Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years [8]. It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection² responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data [10].

At each layer, it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


² We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from most specific to most general: Application (formatted data items), Database, Object (files, directories), and Storage (disk blocks). Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number; a simplified sketch of the resulting properties follows the list below. The input to this algorithm is a digit string, typically 15 or 16 digits³. The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
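
The toy Python sketch below illustrates these three properties with a deliberately insecure stand-in transform (modular addition of a secret key); it is not the NIST FFX algorithm, and the key and card number are made up. A real deployment would call an FFX-mode FPE library at the web interface described above.

```python
SECRET_KEY = 271_828_182_845_904_523  # hypothetical key, held only by the payment interface

def protect(digits: str) -> str:
    """Map a digit string to another digit string of the same length (toy stand-in for FPE)."""
    modulus = 10 ** len(digits)
    return str((int(digits) + SECRET_KEY) % modulus).zfill(len(digits))

def unprotect(digits: str) -> str:
    """Invert protect(); only a holder of SECRET_KEY can recover the original value."""
    modulus = 10 ** len(digits)
    return str((int(digits) - SECRET_KEY) % modulus).zfill(len(digits))

card = "4111111111111111"              # a well-known test card number, not real data
token = protect(card)

print(len(token) == 16 and token.isdigit())  # equivalent: same length, still all digits
print(protect(card) == token)                # referential: same input always yields same output
print(unprotect(token) == card)              # reversible: original recoverable with the key
```

Because the transform is a bijection over same-length digit strings, protected values never collide, so primary and foreign key relationships survive protection exactly as described above.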

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
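
The short sketch below shows why the referential property is what makes such sharing useful: because equal inputs always map to equal outputs, two separately protected data sets can still be joined by a third party that never sees the real identifiers. The transform reuses the same insecure toy construction as the earlier sketch, and all identifiers and records are invented for illustration.

```python
SECRET_KEY = 7_345_821

def protect(member_id: str) -> str:
    """Deterministically map a digit-string ID to another ID of the same length (toy stand-in for FPE)."""
    modulus = 10 ** len(member_id)
    return str((int(member_id) + SECRET_KEY) % modulus).zfill(len(member_id))

visits = [("0001", "clinic A"), ("0002", "clinic B"), ("0001", "clinic C")]
labs = {"0001": 7.9, "0002": 5.4}

# Protect both data sets with the same transform before handing them to an analytics firm.
shared_visits = [(protect(pid), clinic) for pid, clinic in visits]
shared_labs = {protect(pid): value for pid, value in labs.items()}

# The analytics firm never sees real member IDs, yet it can still relate the data sets.
for pid, clinic in shared_visits:
    print(pid, clinic, shared_labs.get(pid))
```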

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

³ American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers. 26 Feb. 2013. Web. 2 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 9 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug. 2009. Web. 8 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar. 2012. Web. 2 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 9 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability, 99.999%, is critical to the success of the business to avoid loss of customers and revenue.
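
As a quick back-of-the-envelope check on what "five nines" actually allows, the arithmetic below (a simple sketch, not an HPE SLA figure) converts 99.999% availability into downtime per year:

```python
# Allowed downtime per year at "five nines" availability.
availability = 0.99999
minutes_per_year = 365 * 24 * 60
allowed_downtime = (1 - availability) * minutes_per_year
print(f"{allowed_downtime:.1f} minutes per year")  # roughly 5.3 minutes
```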

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization

and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacorteshpcom

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unplanned downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors including GPA standardized test scores letters of recommendation and a compelling essay

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open - every time.

You wouldn't jump out of an airplane unless you knew your parachute worked - would you?


with Helion. However, it has since decided not to compete with Amazon AWS and Microsoft Azure in the public-cloud space. It has withdrawn support for a public Helion cloud as of January 31, 2016.

The Announcement of HP Helion
HP announced Helion in May 2014 as a portfolio of cloud products and services that would enable organizations to build, manage, and run applications in hybrid IT environments. Helion is based on the open-source OpenStack cloud. HP was quite familiar with the OpenStack cloud services: it had been running OpenStack in enterprise environments for over three years. HP was a founding member of the OpenStack Foundation and a leader in the OpenStack and Cloud Foundry communities.

HP's announcement of Helion included several initiatives:

• It planned to provide OpenStack public cloud services in twenty of its existing eighty data centers worldwide.

• It offered a free version of the HP Helion OpenStack Community edition, supported by HP, for use by organizations for proofs of concept, pilots, and basic production workloads.

• The HP Helion Development Program, based on Cloud Foundry, offered IT developers an open platform to build, deploy, and manage OpenStack cloud applications quickly and easily.

• HP Helion OpenStack Professional Services assisted customers with cloud planning, implementation, and operation.

These new HP Helion cloud products and services joined the company's existing portfolio of hybrid cloud computing offerings, including the HP Helion CloudSystem, a private cloud solution.

What Is HPE Helion?
HPE Helion is a collection of products and services that comprises HPE's Cloud Services:

• Helion is based on OpenStack, a large-scale open-source cloud project and community established to drive industry cloud standards. OpenStack is currently supported by over 150 companies. It allows service providers, enterprises, and government agencies to build massively scalable public, private, and hybrid clouds using freely available Apache-licensed software.

• The Helion Development Environment is based on Cloud Foundry, an open-source project that supports the full lifecycle of cloud developments, from initial development through all testing stages to final deployment.

• The Helion CloudSystem (described in more detail later) is a cloud solution for a hybrid world. It is a fully integrated, end-to-end private cloud solution built for traditional and cloud-native workloads, and it delivers automation, orchestration, and control across multiple clouds.

• Helion Cloud Solutions provide tested, custom cloud solutions for customers. The solutions have been validated by HPE cloud experts and are based on OpenStack running on HP ProLiant servers.

OpenStack - The Open Cloud
OpenStack has three major components:

• OpenStack Compute - provisions and manages large networks of virtual machines.

• OpenStack Storage - creates massive, secure, and reliable storage using standard hardware.

• OpenStack Image - catalogs and manages libraries of server images stored on OpenStack Storage.

OpenStack Compute
OpenStack Compute provides all of the facilities necessary to support the life cycle of instances in the OpenStack cloud. It creates a redundant and scalable computing platform comprising large networks of virtual machines. It provides the software, control panels, and APIs necessary for orchestrating a cloud, including running instances, managing networks, and controlling access to the cloud.
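For readers who want a concrete feel for what provisioning an instance through the OpenStack APIs looks like, here is a minimal sketch using the open-source openstacksdk Python library; the cloud name and the image, flavor, and network IDs are placeholders you would replace with values from your own environment.

import openstack

# Connect using credentials defined in clouds.yaml (the "mycloud" entry is a placeholder).
conn = openstack.connect(cloud="mycloud")

# Boot a virtual machine on OpenStack Compute.
server = conn.compute.create_server(
    name="demo-instance",
    image_id="<image-uuid>",      # e.g. an image catalogued by the OpenStack Image service
    flavor_id="<flavor-uuid>",    # CPU/RAM/disk sizing
    networks=[{"uuid": "<network-uuid>"}],
)

# Wait until the instance is ACTIVE, then list all running instances.
server = conn.compute.wait_for_server(server)
for s in conn.compute.servers():
    print(s.name, s.status)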

OpenStack Storage
OpenStack Storage is modeled after Amazon's EBS (Elastic Block Store) mass store. It provides redundant, scalable data storage, using clusters of inexpensive commodity servers and hard drives to store massive amounts of data. It is not a file system or a database system. Rather, it is intended for long-term storage of large amounts of data (blobs). Its use of a distributed architecture with no central point of control provides great scalability, redundancy, and permanence.

[Figure: The OpenStack cloud - OpenStack Compute provisions and manages large networks of virtual machines (hosts running hypervisors and VMs); OpenStack Storage creates petabytes of secure, reliable storage using commodity hardware and stores image snapshots; OpenStack Image catalogs and manages libraries of images (server images, web pages, backups, email) and snapshot images of compute nodes.]

OpenStack Image Service

OpenStack Image Service is a retrieval system for virtual-machine images. It provides registration, discovery, and delivery services for these images. It can use OpenStack Storage or Amazon S3 (Simple Storage Service) for storage of virtual-machine images and their associated metadata. It provides a standard web RESTful interface for querying information about stored virtual images.
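The RESTful interface mentioned above is the OpenStack Image (Glance) v2 API; a minimal query sketch using the Python requests library might look like the following, where the endpoint URL and the authentication token are placeholders.

import requests

GLANCE_ENDPOINT = "https://glance.example.com:9292"   # placeholder image-service endpoint
TOKEN = "<keystone-auth-token>"                        # obtained from the Identity service

# List registered virtual-machine images and a few of their metadata fields.
resp = requests.get(
    f"{GLANCE_ENDPOINT}/v2/images",
    headers={"X-Auth-Token": TOKEN},
    timeout=10,
)
resp.raise_for_status()
for image in resp.json()["images"]:
    print(image["id"], image["name"], image.get("disk_format"), image["status"])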

The Demise of the Helion Public Cloud
After announcing its public cloud, HP realized that it could not compete with the giants of the industry, Amazon AWS and Microsoft Azure, in the public-cloud space. Therefore, HP (now HPE) sunsetted its Helion public cloud program in January 2016.

However, HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications based on HPE Helion OpenStack and the HPE Helion Development Platform. It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure.

HPE has been practical in terminating its public cloud program: the purchase of Eucalyptus provides ease of integration with Amazon AWS. Investment in the development of the open-source OpenStack model is protected and remains a robust and solid approach for the building, testing, and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore, HPE supports customers who want to run HPE's Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure.

The Helion Private Cloud - The HPE Helion CloudSystem
Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture"1 is an excellent guide to the Helion CloudSystem.

1 HP Helion Rack solution architecture, HP White Paper, 2015



Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media and active in communities, he has blogged for 7 years.

You can find his blog at hpcomstorageblog

He started his "social persona" as HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as CalvinZito.

You can also contact him via email at calvinzitohpcom

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage
The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement. Hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure To achieve the best possible outcomes from your investment ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business Check out Top 10 things you need to consider when buying a hyper-converged infrastructure

Survey says
Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure, and software-defined storage. This requires registration.

• One more: this focuses on use cases, including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.


What others are saying
Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing: VIDEO HERE

The City of Los Angeles also has adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed": VIDEO HERE

Get more on HPE Hyper-Converged solutions
The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpecomstorageTryVSA and check out the storage that is inside our hyper-converged solutions

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250: VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016 | Fairmont San Jose Hotel | San Jose, CA


Chris Purcell has 28+ years of experience working with technology within the datacenter Currently focused on integrated systems (server storage and networking which come wrapped with a complete set of services)

You can find Chris on Twitter as Chrispman01 Check out his contribution to the HP CI blog at wwwhpcomgociblog

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors - in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation - the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of their infrastructure.

As noted in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things that's needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation
You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way - today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically, you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh | 4/12/2016 Doha | 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
June 7-9, 2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
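To make the "unified API" idea a little more concrete, here is a purely illustrative sketch of template-driven provisioning against a Composer/OneView-style REST interface. The appliance address, endpoint paths, payload fields, and credentials are assumptions for illustration, not documented product calls; treat it as a sketch of the pattern, not a reference.

import requests

COMPOSER = "https://composer.example.com"   # hypothetical composable-infrastructure appliance

# Authenticate and obtain a session token (illustrative endpoint and payload).
session = requests.post(
    f"{COMPOSER}/rest/login-sessions",
    json={"userName": "administrator", "password": "<password>"},
    verify=False,   # use a proper CA bundle in practice
).json()
headers = {"Auth": session["sessionID"], "X-API-Version": "300"}

# Compose a server from pooled resources by applying a server-profile template.
profile = {
    "name": "web-tier-01",
    "serverProfileTemplateUri": "/rest/server-profile-templates/<template-id>",
    "serverHardwareUri": "/rest/server-hardware/<bay-id>",
}
resp = requests.post(f"{COMPOSER}/rest/server-profiles",
                     json=profile, headers=headers, verify=False)
print(resp.status_code)   # the appliance applies firmware, BIOS, storage, and fabric settings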

An infrastructure for both IT worlds
This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services By doing so it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven agility-focused IT that companies need to thrive in the Idea Economy

You can read more about how to do that here: HPE Composable Infrastructure - Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.

Hewlett Packard Enterprise Technology User Group


Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases - and almost instantaneously you receive coupons on your mobile device related to those items. These days, many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey - with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all
When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years, we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators - data scientists, developers, IT operations managers, chief information security officers, and startup founders - who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites - everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data, you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study
Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
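As a toy illustration of the idea (not Dasher's or any vendor's implementation), the sketch below groups assumed Wi-Fi probe events by MAC address so a returning device can be matched to its earlier visits; the event format is invented for the example.

from collections import defaultdict
from datetime import datetime

# Assumed event format: (mac_address, access_point, timestamp)
events = [
    ("aa:bb:cc:dd:ee:01", "entrance-ap", datetime(2016, 3, 1, 10, 2)),
    ("aa:bb:cc:dd:ee:01", "denim-ap",    datetime(2016, 3, 1, 10, 9)),
    ("aa:bb:cc:dd:ee:01", "entrance-ap", datetime(2016, 3, 8, 18, 40)),  # return visit
]

visits = defaultdict(list)
for mac, ap, ts in events:
    visits[mac].append((ts, ap))

def is_returning(mac, min_days=1):
    """A device counts as 'returning' if its pings span more than min_days."""
    stamps = sorted(ts for ts, _ in visits[mac])
    return (stamps[-1] - stamps[0]).days >= min_days

for mac in visits:
    print(mac, "returning customer" if is_returning(mac) else "first visit")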

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk - they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises - either virtualized or installed directly on the hard disk (i.e., "bare metal") - or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from an on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics Learn how the future is all about fast data Find out how big data trends affect your business


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company

on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave until very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which we would often forget to do or just not do because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months, we bit the bullet and had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (continued)

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
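Putting the two figures quoted above together is simple arithmetic (the numbers are the article's, the multiplication is ours):

# A single hour of downtime: upwards of $8,000 for a small business (figure quoted above).
cost_per_hour = 8_000

# Half of system outages reportedly last 24 hours or longer.
outage_hours = 24

print(f"A 24-hour outage: at least ${cost_per_hour * outage_hours:,}")   # $192,000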

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is - THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is looked at and treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just nice to have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security in disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data, and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding involved in maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, and archive or vault keys for long-term recovery - and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exist

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized Various approaches to managing keys are discussed with the impact toward supporting enterprise policy

Figure 1 Local key management over a local network where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications - the analogy being locking your front door but placing keys under a doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, because each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audit and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles - prone to error, neglect, and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles - media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2. Remote key management separates encryption key management from the encrypted data


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at a lower risk.

While application keys are no longer co-located with data locally encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally Although economies of scale are not improved this approach can have similar simplicity as local methods while also suffering from a similar dependence on manual procedures

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which makes the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple - similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility - if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified - or, commonly, an enterprise secure key management - system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed in clusters, redundantly, for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications keys and administrative users using automated controls that follow security best practices Unlike distributed key management systems that may operate locally centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies with reliability while ensuring that activity logging is tracked consistently for auditing purposes and alerts and reporting are more efficiently distributed and escalated when necessary

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility - the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system - best practices can minimize risks

Figure 4 Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices - adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover Administrators can connect to appliances from anywhere globally to enforce policies with a single set of controls to manage and a single point for auditing security and performance of the distributed system

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI - relative to cost, administrative flexibility, and the security assurance levels provided.

• Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels. A hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

• Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations (a minimal client sketch follows this list). Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

• Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

• Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

• High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

• Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

• Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

• Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
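To illustrate what KMIP-based enrollment of an application against a central key manager can look like, here is a minimal sketch using the open-source PyKMIP client library. The hostname, port, and certificate paths are placeholders, and any KMIP-compliant key manager could sit on the other end; this is a sketch of the protocol pattern, not a product-specific procedure.

from kmip.pie.client import ProxyKmipClient
from kmip import enums

# Connect to a KMIP-compliant key manager over mutually authenticated TLS.
with ProxyKmipClient(
    hostname="keymanager.example.com",   # placeholder key-manager address
    port=5696,                           # default KMIP port
    cert="client-cert.pem",
    key="client-key.pem",
    ca="ca-cert.pem",
) as client:
    # Generate a 256-bit AES key on the key manager and activate it.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
    client.activate(key_id)

    # Later, an authorized application retrieves the key material by its identifier.
    key = client.get(key_id)

    # If a device is lost or compromised, the key can be revoked centrally.
    client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, key_id)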

Conclusions

As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5. Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified system.

HPE Data Security Technologies

HPE Enterprise Secure Key Manager

Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high-availability clustering and failover; identity and access management for administrators and encryption devices; secure backup and recovery; a local certificate authority; and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise

ESKM scales easily to support large enterprise deployments of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. You can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security

HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE

Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet, and PageWide) to address their printing needs.

How HP PageWide Technology is different

To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office

HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet, and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents at up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small workgroups and offices, helping provide big-business impact at a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color at the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, and they have the smallest carbon footprint among printers in their class by a dramatic margin. Fewer consumable parts also means less maintenance and fewer replacements over the life of the printer.

Printing in your organization

Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers so many people can print everyday documents. Some have small workgroups that need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information, helping you make the most of your IT investment.

Turning to HP

To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution

Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.[1] In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.[2]

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide sensor feedback via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-power, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions

Given the requirement for connectivity, many see IoT as a natural fit for the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option, for example low-throughput network connectivity, is required to improve the efficiency and return on investment (ROI) of such use cases.



Delivering on the IoT Customer Experience

[1] Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
[2] The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE across telco, unified communications, alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited to a specific device type, vertical, protocol, and set of business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Marketplace, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. Doing so creates added value and the margins needed to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview

For CSPs and enterprises to become IoT operators and monetize the value of IoT, a horizontal platform is needed. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it is removed. In addition, the platform must support scalability and lifecycle management when devices are distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, including virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (using standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, data acquisition, and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable.

With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment of HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer), and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture

The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and to various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo, and are used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared to traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems and can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership), with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with the option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)

The DSM module is the nerve center of the HPE Universal IoT Platform; it manages the end-to-end lifecycle of the IoT service and the associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

Figure: HPE Universal IoT Platform - manage sensors, verticals, data monetization chain, standards alignment, connectivity-agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plans, subscriptions, device association, and more) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)

The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of device controllers and proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols, such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
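As a purely illustrative example of the kind of device-side traffic such protocol connectors handle, the sketch below publishes a meter reading over MQTT using the open-source Eclipse Paho client (paho-mqtt 1.x API). The broker address, topic layout, and payload fields are assumptions for illustration, not part of the HPE platform itself.

# Illustrative device-side publish over MQTT with the Eclipse Paho client
# (paho-mqtt 1.x API). Broker, topic, and payload schema are placeholders;
# a real deployment would follow whatever its MQTT device controller expects.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "iot-gateway.example.com"   # placeholder broker / device controller
TOPIC = "site42/meter/7/telemetry"   # placeholder topic hierarchy

client = mqtt.Client(client_id="meter-7")
client.tls_set()                      # server-authenticated TLS; certs per deployment
client.connect(BROKER, port=8883, keepalive=60)
client.loop_start()

reading = {"ts": int(time.time()), "kwh": 1.27, "voltage": 229.8}
client.publish(TOPIC, json.dumps(reading), qos=1)   # QoS 1 = at-least-once delivery

client.loop_stop()
client.disconnect()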

Data Acquisition and Verification (DAV)

DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a sketch of such a call follows the list below). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure, encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics or batch processing. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity for the stored data at an efficient cost. This combination of hot and cold data storage enables analytics over a longer period of IoT data collected from the devices.

Data Analytics

The Data Analytics module leverages HPE Vertica technology to discover meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing them with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
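As a rough illustration of the kind of batch KPI such a columnar store makes practical, the sketch below aggregates 30 days of meter readings in Vertica using the open-source vertica_python driver. The connection details, table, and column names are assumptions for illustration, not HPE's actual schema.

# Illustrative batch KPI over IoT readings stored in Vertica, using the
# open-source vertica_python driver. Connection details and the
# meter_readings table/columns are placeholders.
import vertica_python

conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analytics",
    "password": "********",
    "database": "iot",
}

QUERY = """
    SELECT device_id,
           DATE_TRUNC('day', reading_ts) AS day,
           SUM(kwh)                      AS daily_kwh
    FROM   meter_readings
    WHERE  reading_ts >= CURRENT_DATE - 30
    GROUP  BY device_id, DATE_TRUNC('day', reading_ts)
    ORDER  BY device_id, day
"""

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute(QUERY)                       # columnar scan + aggregate
    for device_id, day, daily_kwh in cursor.fetchall():
        print(device_id, day, daily_kwh)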

Operations and Business Support Systems (OSS/BSS)

The OSS/BSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The OSS/BSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)

The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success

The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving the customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, sitting on top of ubiquitous, securely managed connectivity and enabling the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data depends on the size of the company; the insights gleaned from it do not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products. By analyzing customer behavior it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations. Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage. Implementing business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover. The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
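As a concrete (and purely illustrative) first pass at that idea, the sketch below separates first-time from repeat buyers in a simple order export and compares the two groups; the file name and column names are assumptions about what such an export might contain.

# Illustrative first pass at customer data: split first-time vs. repeat buyers
# and compare their behavior. File and column names are placeholder assumptions
# about a typical order export (one row per order).
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Count orders per customer, then label each order's customer accordingly.
order_counts = orders.groupby("customer_id")["order_id"].transform("count")
orders["segment"] = (order_counts > 1).map({True: "repeat", False: "first-time"})

# Compare the two segments on a few simple measures.
summary = orders.groupby("segment").agg(
    customers=("customer_id", "nunique"),
    orders=("order_id", "count"),
    avg_order_value=("order_total", "mean"),
)
print(summary)

Even a small summary like this starts to answer the questions above: which segment drives revenue, and whether repeat buyers spend more per order than first-time buyers.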

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER

One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements, thanks to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement effort and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
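To make the SIEM-integration point concrete, here is a generic sketch of forwarding a security event to a syslog-based collector using only the Python standard library. The collector address and event fields are placeholders, and this is not how XYGATE or ArcSight integration actually works on NonStop; it simply illustrates the general pattern of centralizing security events so suspicious activity can be correlated and alerted on.

# Generic illustration of shipping a security event to a syslog-based SIEM
# collector. Collector address and event format are placeholders; real
# NonStop deployments would use the vendor's own audit-forwarding tooling.
import logging
import logging.handlers

siem = logging.getLogger("security-audit")
siem.setLevel(logging.INFO)
siem.addHandler(
    logging.handlers.SysLogHandler(address=("siem-collector.example.com", 514))
)

# Example event: repeated failed logons by a privileged account.
siem.warning(
    "auth_failure user=%s source=%s count=%d window=%ds",
    "SUPER.OPER", "10.1.2.3", 5, 60,
)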

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementing those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI; access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my part. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you get into a sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned, veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the most mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective as a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do and to weave it into your own style, is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job I have been very blessed to have been mentoring people who have gone on to do some really wonderful things Itrsquos just something that I enjoy doing more than anything else

Gabrielle Tom was there a mentor who has motivated you to be able to influence people like Jeff

Tom Oh yes I think everyone looks for a mentor and Irsquom no exception One of them was a regional VP of Tandem named Terry Murphy We met at Data General and hersquos the one who convinced me to go into sales management and later he sold me on coming to Tandem Itrsquos a friendship thatrsquos gone on for 35 years and we see each other very often Hersquos one of the smartest men I know and he has great insight into the sales process To this day hersquos one of my strongest mentors

Gabrielle Jeff what are some of the ideas you have for the role and for the company moving forward

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress – Not just another pretty face

An integrated SQL Database Manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been something of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first; upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed and cURL ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
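As a rough illustration of the flow just described (config_h.com itself is a DCL command procedure on OpenVMS, not Python), the sketch below shows the general idea: resolve what can be resolved from a config.h.in template, defer the rest to the hand-maintained config_vms.h, and end the generated file by including it. The KNOWN_DEFINES values here are hypothetical placeholders.

# Rough sketch, for illustration only, of the configuration-generation flow
# the article describes. Real systems derive these values by probing the
# target platform; KNOWN_DEFINES is a hypothetical stand-in.
KNOWN_DEFINES = {
    "HAVE_UNISTD_H": "1",
    "HAVE_SYS_TYPES_H": "1",
    "PACKAGE_NAME": '"bash"',
}

def generate_config_h(template_path="config.h.in", output_path="config.h"):
    lines_out = []
    with open(template_path) as template:
        for line in template:
            stripped = line.strip()
            if stripped.startswith("#undef "):
                macro = stripped.split()[1]
                if macro in KNOWN_DEFINES:
                    # Most macros (roughly 95 percent) can be resolved here.
                    lines_out.append(f"#define {macro} {KNOWN_DEFINES[macro]}\n")
                else:
                    # Anything unresolved is left for the package-specific
                    # config_vms.h to supply.
                    lines_out.append(f"/* {macro}: see config_vms.h */\n")
            else:
                lines_out.append(line)
    # The generated file ends by pulling in the hand-maintained overrides.
    lines_out.append('#include "config_vms.h"\n')
    with open(output_path, "w") as output:
        output.writelines(lines_out)

if __name__ == "__main__":
    generate_config_h()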

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. There are also updated ports in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems; even though the US Navy obtains security patches for a nine-million-dollar annual fee [3], such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the accompanying diagram.*

Integrating Data Protection Into Legacy Systems: Methods And Practices – Jason Paul Kazarian

* This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 to 1/3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications [7]. Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years [8]. It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection** responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data [10].

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


** We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

Layer         Example traffic
Application   Sensitive data items
Database      Formatted data items
Object        Files, directories
Storage       Disk blocks

Flow represents the transport of clear data between layers via a secure tunnel; the right-hand column gives example traffic at each layer.


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, the system also attracts more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.*** The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
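The equivalent, referential and reversible properties are easy to see in a toy example. The sketch below is a minimal, keyed Feistel-style transform over digit strings written purely for illustration; it is not the NIST FFX (FF1/FF3) algorithm and must not be used to protect real card data.

# Toy illustration of format-preserving behavior on digit strings.
# NOT the NIST FFX standard and NOT secure; it only demonstrates the
# equivalent / referential / reversible properties described above.
import hashlib
import hmac

ROUNDS = 8

def _round_value(key: bytes, round_no: int, other_half: int, width: int) -> int:
    # Deterministic, keyed round function reduced to the digit-string domain.
    msg = f"{round_no}:{other_half}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)

def protect(digits: str, key: bytes) -> str:
    mid = len(digits) // 2
    lw, rw = mid, len(digits) - mid
    left, right = int(digits[:mid]), int(digits[mid:])
    for rnd in range(ROUNDS):
        if rnd % 2 == 0:
            left = (left + _round_value(key, rnd, right, lw)) % 10 ** lw
        else:
            right = (right + _round_value(key, rnd, left, rw)) % 10 ** rw
    return f"{left:0{lw}d}{right:0{rw}d}"

def reveal(token: str, key: bytes) -> str:
    # Inverse transform: undo the rounds in reverse order.
    mid = len(token) // 2
    lw, rw = mid, len(token) - mid
    left, right = int(token[:mid]), int(token[mid:])
    for rnd in reversed(range(ROUNDS)):
        if rnd % 2 == 0:
            left = (left - _round_value(key, rnd, right, lw)) % 10 ** lw
        else:
            right = (right - _round_value(key, rnd, left, rw)) % 10 ** rw
    return f"{left:0{lw}d}{right:0{rw}d}"

if __name__ == "__main__":
    pan = "4111111111111111"                 # sample 16-digit number
    key = b"demo key - not for production"
    token = protect(pan, key)
    assert token.isdigit() and len(token) == len(pan)   # equivalent
    assert protect(pan, key) == token                    # referential
    assert reveal(token, key) == pan                     # reversible
    print(pan, "->", token)

The point of the exercise is simply that the protected value keeps the shape the legacy application expects, always maps the same input to the same output, and can be reversed only by a party holding the key.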

Benefits of sharing protected data: One obvious benefit of implementing a-priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI) or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

*** American Express uses 15 digits, while Discover, MasterCard and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.

Summary: Legacy systems present challenges when applying storage, object and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended, as well as new functions enabled by sharing anonymized data.

[1] Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
[2] IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
[3] Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct. 2015.
[4] Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
[5] Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 2 Nov. 2015.
[6] US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
[7] Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
[8] Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 9 Oct. 2015. Web.
[9] Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
[10] Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 8 Oct. 2015.
[11] Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
[12] Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
[13] Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 2 Nov. 2015.
[14] "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 9 Oct. 2015.
[15] Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
[16] Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
[17] McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
[18] Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2, along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you.


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.



OpenStack Image Service is a retrieval system for virtual-machine images. It provides registration, discovery and delivery services for these images. It can use OpenStack Storage or Amazon S3 (Simple Storage Service) for storage of virtual-machine images and their associated metadata. It provides a standard RESTful web interface for querying information about stored virtual images.
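For readers who want to see what that RESTful interface looks like in practice, here is a minimal Python sketch that lists images through the Image Service (Glance) v2 API. The endpoint URL and token are placeholders; in a real deployment both come from your Keystone identity service.

# Minimal sketch: list images via the OpenStack Image Service (Glance) v2 REST API.
# GLANCE_URL and AUTH_TOKEN are placeholders; obtain a real token from your
# identity (Keystone) service before running.
import requests

GLANCE_URL = "https://cloud.example.com:9292"   # hypothetical endpoint
AUTH_TOKEN = "replace-with-keystone-token"

def list_images():
    response = requests.get(
        f"{GLANCE_URL}/v2/images",
        headers={"X-Auth-Token": AUTH_TOKEN},
        timeout=30,
    )
    response.raise_for_status()
    for image in response.json().get("images", []):
        # Each entry carries the registered image metadata mentioned above.
        print(image["id"], image["name"], image.get("disk_format"))

if __name__ == "__main__":
    list_images()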

The Demise of the Helion Public Cloud: After announcing its public cloud, HP realized that it could not compete with the giants of the industry, Amazon AWS and Microsoft Azure, in the public-cloud space. Therefore, HP (now HPE) sunsetted its Helion Public Cloud program in January 2016.

However, HPE continues to promote its private and hybrid clouds by helping customers build cloud-based applications based on HPE Helion OpenStack and the HPE Helion Development Platform. It provides interoperability and cloud bursting with Amazon AWS and Microsoft Azure.

HPE has been practical in terminating its public cloud program: the purchase of Eucalyptus provides ease of integration with Amazon AWS. Investment in the development of the open-source OpenStack model is protected and remains a robust and solid approach for the building, testing and deployment of cloud solutions. The result is protection of existing investment and a clear path to the future for the continued and increasing use of the OpenStack model.

Furthermore, HPE supports customers who want to run HPE's Cloud Foundry platform for development in their own private clouds or in large-scale public clouds such as AWS or Azure.

The Helion Private Cloud – The HPE Helion CloudSystem: Building a custom private cloud to support an organization's native cloud applications can be a complex project that takes months to complete. This is too long a time if immediate needs must be addressed. The Helion CloudSystem reduces deployment time to days and avoids the high cost of building a proprietary private cloud system.

The HPE Helion CloudSystem was announced in March 2015. It is a secure private cloud delivered as a preconfigured and integrated infrastructure. The infrastructure, called the HPE Helion Rack, is an OpenStack private-cloud computing system ready for deployment and management. It comprises a minimum of eight HP ProLiant physical servers to provide performance and availability. The servers run a hardened version of Linux, hLinux, optimized to support Helion. Additional servers can be added as bare-metal servers or as virtual servers running on the KVM hypervisor.

The Helion CloudSystem is fully integrated with the HP Helion Development Platform. Since the Helion CloudSystem is based on the open-source OpenStack cloud, there is no vendor lock-in. HP's white paper "HP Helion Rack solution architecture" [1] is an excellent guide to the Helion CloudSystem.

[1] HP Helion Rack solution architecture, HP White Paper, 2015.

ADVOCACY: The HPE Helion Private Cloud and Cloud Broker Services



Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media who is active in communities, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog

He started his "social persona" as @HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as @CalvinZito.

You can also contact him via email at calvin.zito@hpe.com.

Let Me Help You With Hyper-Converged – Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage. The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement. Hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure. To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says. Hyper-convergence is growing in popularity, even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says:

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure and software-defined storage. This requires registration.

• One more: this focuses on use cases including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.


What others are saying. Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing. VIDEO HERE

The City of Los Angeles has also adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions. The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016 | Fairmont San Jose Hotel | San Jose, CA


Chris Purcell has 28+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as @Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog.

Composable Infrastructure: Breakthrough to Fast, Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of its infrastructure.

As discussed in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things that's needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage and fabric resources.

The many virtues of disaggregation. You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way – today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically you have to choose small, medium or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh
4/12/2016 Doha
4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
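To give a feel for that kind of infrastructure programmability, here is a small, hypothetical Python sketch of requesting a composed node through a unified REST API. The endpoint, header, field names and template ID are illustrative placeholders only, not the actual HPE Synergy or OneView API.

# Hypothetical sketch of provisioning through a unified infrastructure API.
# Endpoint, token, fields and template ID are placeholders, not real HPE values.
import requests

API_BASE = "https://composer.example.com/rest"   # placeholder endpoint
SESSION_TOKEN = "replace-with-session-token"

def provision_node(name: str, template_id: str) -> dict:
    """Ask the infrastructure API to compose a node from pooled resources."""
    payload = {
        "name": name,
        "templateUri": f"/rest/templates/{template_id}",  # hypothetical field
    }
    response = requests.post(
        f"{API_BASE}/server-profiles",
        json=payload,
        headers={"Auth": SESSION_TOKEN},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    profile = provision_node("web-node-01", "small-web-template")
    print("Provisioned:", profile.get("name"))

The point is less the specific calls than the pattern: infrastructure is requested declaratively through one API, so provisioning can be scripted, repeated and versioned like any other software artifact.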

An infrastructure for both IT worlds. This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and, at the same time, accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of the cloud giants.



Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations, those specifically designed to leverage Big Data architectures, could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics: analytics-as-a-service and hardened architectures that are almost turnkey, with back-end hardware, database support and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all. When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage and networking types of problems. Over the past five or six years we've been spending our energy, strategy and time on the big areas around mobility, security and, of course, Big Data."

Democratizing Big Data Value – Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators – data scientists, developers, IT operations managers, chief information security officers and startup founders – who use technology to improve the way we live, work and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company or a large enterprise getting information from all your smaller remote sites – everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data, but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory, or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study. Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
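As a rough sketch of the idea (not Dasher's actual implementation), the snippet below groups captured Wi-Fi probe events by MAC address to build a simple per-device visit history. The event format is hypothetical, and real deployments also have to contend with MAC randomization, consent and privacy requirements.

# Toy sketch: group Wi-Fi probe events by device MAC address to build a
# simple visit history. Event format is hypothetical.
from collections import defaultdict
from datetime import datetime

events = [
    {"mac": "aa:bb:cc:dd:ee:01", "ap": "entrance", "ts": "2016-03-01T10:02:00"},
    {"mac": "aa:bb:cc:dd:ee:01", "ap": "denim",    "ts": "2016-03-01T10:09:00"},
    {"mac": "aa:bb:cc:dd:ee:01", "ap": "entrance", "ts": "2016-03-15T17:40:00"},
]

def build_visit_history(probe_events):
    history = defaultdict(list)
    for event in probe_events:
        seen_at = datetime.fromisoformat(event["ts"])
        history[event["mac"]].append((seen_at, event["ap"]))
    for visits in history.values():
        visits.sort()  # chronological path through the store
    return history

if __name__ == "__main__":
    history = build_visit_history(events)
    for mac, visits in history.items():
        first_seen = visits[0][0].date()
        print(f"{mac}: {len(visits)} sightings, first seen {first_seen}")

Joining this history with transaction data is what lets a clerk, or a real-time offer engine, recognize a returning device and act on what happened during the previous visit.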

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk - they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores and gauge the impact of weather.

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises - either virtualized or installed directly on the hard disk (i.e., "bare metal") - or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment Those companies typically try to maximize CPU usage and then add nodes to increase capacity

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.

13

STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology

14

Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which oftentimes we would forget to do, or just not do because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure and product security to ensure the best security experience for customers in the mission-critical computing marketplace.

15

How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning, continued

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cybersecurity incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's six zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
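To put those figures in perspective, here is a minimal back-of-the-envelope calculation using the numbers quoted above; the hourly cost and outage duration are the article's illustrative values, not measured data:

```python
# Back-of-the-envelope downtime cost using the figures quoted above.
hourly_cost_small_business = 8_000   # USD per hour of downtime (article figure)
outage_hours = 24                    # half of outages reportedly last at least this long

estimated_loss = hourly_cost_small_business * outage_hours
print(f"A 24-hour outage costs a small business roughly ${estimated_loss:,}")
# -> A 24-hour outage costs a small business roughly $192,000
```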

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and a lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34 and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning

16

Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics and data restoration. Oftentimes security is overlooked, and infrastructure designated as disaster recovery is looked at and treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box is ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and inclusion of DR systems in active monitoring is not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems and effectively prepare for recovery operations while keeping security at the forefront - and keep your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.

17

Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data, and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the organizational safeguards for maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data, which can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery - reliable and effective throughout the information lifecycle.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving systems, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, the security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, and archive or vault keys for long-term recovery - and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
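As a minimal sketch of what such a manual, local lifecycle often boils down to - assuming a simple file-based vault on the same site as the encrypted data, with the directory path and function names invented for illustration and unrelated to any HPE product - consider:

```python
import base64
import json
import os
from pathlib import Path

# Illustrative vault location; in a real deployment this directory itself must be protected.
VAULT_DIR = Path("/secure/local-key-vault")

def generate_and_archive_key(key_id, length_bytes=32):
    """Generate a random 256-bit key and archive it in the local file vault."""
    key = os.urandom(length_bytes)
    VAULT_DIR.mkdir(parents=True, exist_ok=True)
    record = {"key_id": key_id, "key": base64.b64encode(key).decode()}
    (VAULT_DIR / f"{key_id}.json").write_text(json.dumps(record))
    return key

def recover_key(key_id):
    """Recover an archived key - if this one file is lost, so is the encrypted data."""
    record = json.loads((VAULT_DIR / f"{key_id}.json").read_text())
    return base64.b64decode(record["key"])
```

The single copy of the vault is exactly the weakness the following paragraphs describe: no central backup, no audit trail, and everything depends on local procedures.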

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize the external risks that could undermine local encryption and key management lifecycle operations.

Local, remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, along with their impact on supporting enterprise policy.

Figure 1 Local key management over a local network where keys are stored with the encrypted storage

Nathan Turajski

18

However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications - the analogy being locking your front door but placing the keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audits and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles - prone to error, neglect and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles - media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
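As a purely illustrative sketch of that revocation scenario - not an HPE ESKM interface - a remote key service handling a boot-time key request might look something like the following, refusing and destroying keys for devices reported stolen and writing an audit record either way (device IDs, store contents and log format are all assumptions for the example):

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("key-audit")

key_store = {"disk-0042": b"...32-byte AES key..."}   # hypothetical server-side key vault
stolen_devices = {"disk-0042"}                        # devices reported stolen

def handle_boot_key_request(device_id):
    """Serve a key at device boot-up unless the device has been reported stolen."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if device_id in stolen_devices:
        key_store.pop(device_id, None)   # destroy the key remotely
        audit_log.info("%s key for %s DENIED and DESTROYED (reported stolen)", timestamp, device_id)
        return None
    audit_log.info("%s key for %s served", timestamp, device_id)
    return key_store.get(device_id)
```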

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data

19

without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have simplicity similar to local methods, while also suffering from a similar dependence on manual procedures.

Pros
• Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple - similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility - if encryption devices move, key management systems can remain in place operationally

Cons
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified - or, commonly, an enterprise secure key management - system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed in redundant clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys. A centralized solution, meanwhile, runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Recognizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys and administrative users, using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility - the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users could compromise the system - best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys

Local, remote and centrally unified key management, continued

20

Best practices - adopting a flexible, strategic approach
In real-world practice, local, remote and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing the security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI - relative to cost, administrative flexibility and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
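To make the KMIP interoperability point concrete, here is a hedged sketch using the open-source PyKMIP client - not an HPE tool - to ask any KMIP-compliant key manager to create, activate and retrieve an AES key; the server address and certificate file names are placeholders:

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Placeholder connection details for any KMIP-compliant key manager.
client = ProxyKmipClient(
    hostname="keyserver.example.com",
    port=5696,
    cert="client-cert.pem",
    key="client-key.pem",
    ca="ca-cert.pem",
)

with client:
    # Ask the key manager to generate a 256-bit AES key and return its unique ID.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
    client.activate(key_id)           # move the key into its active lifecycle state
    key_object = client.get(key_id)   # retrieve the managed key for application use
    print("Created and retrieved key:", key_id)
```

Because the protocol, not the vendor, defines these operations, the same application code can be pointed at a different KMIP server without changes - which is the reusability argument the article makes for standards-based key managers.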

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications

21

As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability and the high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use and in-motion. Our solutions provide advanced encryption, tokenization and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google) and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.

Local, remote and centrally unified key management

22

23

Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints or better security, but from the outside most printers appear to function fundamentally the same - click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs - the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies - laser, inkjet and PageWide - to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing - laser, inkjet and now PageWide - can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents at up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact at a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.

24

Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents - without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers; plus, they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services and solutions into one integrated approach. Working with you, we assess, deploy and manage your imaging and printing system - tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage and share information, and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies and www.hp.com/go/printerspeeds

25

26

IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory - from limited-capability M2M services to the super-capable IoT ecosystem - has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity - economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility or bandwidth) - which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low-throughput network connectivity.

continued on pg 27

"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton
Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE in Telco, Unified Communications, Alliances and software development.

Nigel Upton

27

Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management and charging
• Data processing layer operators to acquire data, then verify, consolidate and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. Doing so creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage the whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software - widely used in the communications industry - by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices - standards-based and proprietary communication - and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed or mobile networks. These include GPRS (2G and 3G), LTE/4G and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable.

With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services and solutions, IoT operators - CSPs and enterprises alike - can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer) and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical- and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform over traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric - as data and its monetization are the essence of the IoT business model - and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules:

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors and verticals, data monetization chain, standard alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]

29

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C and B2B2C.

With the DSM module you can manage IoT applications - configuration, tariff plan, subscription, device association and others - and IoT gateways and devices, including provisioning, configuration and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices, and for communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs - volume, variety and velocity - typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
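
To make the device-side connectivity concrete, here is a minimal sketch of a sensor or gateway publishing a telemetry reading over MQTT, one of the protocols listed above. The broker address, topic layout, client id, and payload fields are illustrative assumptions, not documented HPE Universal IoT Platform endpoints.

```python
# Minimal MQTT publish sketch using the Eclipse Paho client (pip install paho-mqtt).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="meter-0001")        # hypothetical device identifier
client.connect("iot-platform.example.com", 1883)    # assumed broker host and port

reading = {"deviceId": "meter-0001", "kwh": 42.7, "ts": "2016-04-01T12:00:00Z"}
client.publish("meters/meter-0001/telemetry", json.dumps(reading), qos=1)

client.disconnect()
```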

Data Acquisition and Verification (DAV) DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data unit transformation to application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Processing data and triggering actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and support for data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access Control Policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics over a longer period of IoT data collected from the devices.
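
To give a feel for the north-bound consumption described above, the sketch below reads the latest content instance from a oneM2M-style resource tree over HTTP REST. The base URL, resource path, and header values (the originator and request identifier headers come from the oneM2M HTTP binding) are assumptions made for illustration, not documented platform interfaces.

```python
# Sketch: an application reading the newest telemetry value for one device.
import requests

BASE = "https://iot-platform.example.com/onem2m"   # assumed CSE endpoint
headers = {
    "X-M2M-Origin": "C-analytics-app",             # assumed originator identifier
    "X-M2M-RI": "req-0001",                        # request identifier
    "Accept": "application/json",
}

# 'la' addresses the latest content instance of an assumed telemetry container
resp = requests.get(f"{BASE}/meter-0001/telemetry/la", headers=headers)
resp.raise_for_status()
print(resp.json())
```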

Data Analytics The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
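
As a hedged illustration of the kind of batch analytics this module enables, the snippet below runs an hourly aggregation over device readings with the open-source vertica_python client. The connection details, table, and columns are placeholder assumptions; the Data Analytics module's own schemas and tooling are not described here.

```python
# Sketch: hourly average reading per device over the last day (assumed schema).
import vertica_python

conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analyst",
    "password": "secret",
    "database": "iot",
}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()
cur.execute("""
    SELECT device_id,
           DATE_TRUNC('hour', reading_ts) AS hour,
           AVG(kwh) AS avg_kwh
    FROM telemetry
    WHERE reading_ts > NOW() - INTERVAL '1 day'
    GROUP BY device_id, DATE_TRUNC('hour', reading_ts)
    ORDER BY 1, 2
""")
for device_id, hour, avg_kwh in cur.fetchall():
    print(device_id, hour, avg_kwh)
conn.close()
```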

Operations and Business Support Systems (OSS/BSS) The OSS/BSS module provides a consolidated end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The OSS/BSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC) The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy turning previously ignored data into business gold

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA DRIVEN DECISION MAKING Business intelligence from systematic customer data analysis can profoundly impact many areas of the business including

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
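
For a sense of how small such a starting point can be, here is a sketch of the first-time vs. repeat split described above using pandas. The file name and column names (customer_id, order_total, order_date) are assumptions about a typical sales export.

```python
# Sketch: segment customers into first-time vs. repeat buyers and compare spend.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

per_customer = orders.groupby("customer_id")["order_total"].agg(["count", "sum", "mean"])
per_customer["segment"] = per_customer["count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Average order value and average per-customer revenue for each segment
print(per_customer.groupby("segment")[["mean", "sum"]].mean())
```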

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal The IT organization loves the limited configuration and management requirements due to the automated workflow

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba a Hewlett Packard Enterprise Company She is a seasoned marketing professional who enjoys engaging with customers learning how they use technology to their advantage and telling their success stories Her hobbies include cycling scuba diving organic gardening and raising chickens


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39 percent and 35 percent of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99 percent of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35 percent of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
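
The "monitor for deviation from normal" step can be as simple as comparing each user's daily activity to their own baseline. The sketch below is a conceptual illustration only, not XYGATE or any NonStop-specific tooling, and the input file and column names are assumptions about an exported audit summary.

```python
# Sketch: flag days where a user's privileged-command count is far above baseline.
import pandas as pd

# Assumed columns: user, day, command_count
events = pd.read_csv("audit_counts.csv", parse_dates=["day"])

baseline = events.groupby("user")["command_count"].agg(["mean", "std"]).rename(
    columns={"mean": "baseline", "std": "spread"}
)
events = events.join(baseline, on="user")

# Three spreads above a user's own baseline is treated as suspicious here
events["suspicious"] = events["command_count"] > (
    events["baseline"] + 3 * events["spread"].fillna(0)
)
print(events[events["suspicious"]])
```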

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50 percent of companies in the study had implemented access governance tools, and fewer than 45 percent had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23 percent ROI and encryption technologies had a 21 percent ROI. Access governance had a 13 percent ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle Tell us about how things have been going while Tom prepares to retire

Jeff Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle So the transition has already taken place

Jeff Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle So it doesn't differ too much then from your previous role?

Jeff No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle Could you give us a little bit of information about your background leading into your time at HPE

Jeff My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring me as a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle Could you share one

Tom Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20mph in a 40mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing and I realized then he was so young. It's just funny when you start getting to sales engagement and you're peers, and then you realize this difference in age.

Jeff When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


really ecommerce-focused and online transaction processing (OLTP) focused which came naturally to me because of my background as it would be for anyone selling Tandem equipment

I did that for a few years and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle That's such a great story

Jeff It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff If you were to ask me five years ago where I would envision myself or what would I want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle It really is a unique community I think we are all lucky to be a part of it

Jeff Agreed

Tom I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are next to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle Jeff what was it like to have Tom as your long-time mentor

Jeff It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle Tom, what has it been like from your perspective to be Jeff's mentor?

Tom Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle Tom was there a mentor who has motivated you to be able to influence people like Jeff

Tom Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle Jeff what are some of the ideas you have for the role and for the company moving forward

Jeff One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle It's just kind of a bittersweet moment

Jeff Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle Yes Tom is kind of a legend in the NonStop space

Jeff He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle Well it sounds like an interesting opportunity and at an interesting time

Jeff With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle We wish you the best of luck in your new position Jeff

Jeff Thank you


SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New: Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation which is of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available These kits do not require installing GNV to use

If you do install/upgrade GNV, then GNV must be installed first, and note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The porting effort by John Malmberg on cPython 3.6a0+ is an example of using the above tools for a build. It is a work-in-progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running the config_h.com.

The config_h.com generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
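
To make the idea behind config_h.com concrete, here is a rough Python sketch of what such a generator does: answer each #undef in config.h.in with a #define when the probe result is known, and pull in a hand-maintained override file at the end. It is purely illustrative of the approach; the real config_h.com is a DCL procedure with far more platform probing, and the probe results below are assumptions.

```python
# Illustrative only: a trivial config.h generator in the spirit of config_h.com.
KNOWN = {"HAVE_UNISTD_H": "1", "HAVE_STRING_H": "1"}   # assumed probe results

def generate(template="config.h.in", output="config.h"):
    with open(template) as src, open(output, "w") as dst:
        for line in src:
            stripped = line.strip()
            if stripped.startswith("#undef "):
                name = stripped.split()[1]
                if name in KNOWN:
                    dst.write(f"#define {name} {KNOWN[name]}\n")
                    continue
            dst.write(line)
        # Package-specific overrides the generator cannot derive on its own
        dst.write('#include "config_vms.h"\n')

if __name__ == "__main__":
    generate()
```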

In many ways the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well These include a new subprocess module for Python as well as new releases of both cURL and zlib

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor's of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to see your help on the projects as well


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31 percent are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24 percent of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Figure: The data protection stack. Layers from most specific to most general: Application (formatted data items), Database, Object (files, directories), and Storage (disk blocks). Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning having the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20 percent of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
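
To illustrate those three properties, here is a toy format-preserving cipher for a 16-digit card number: a keyed Feistel network over two 8-digit halves with HMAC-SHA256 as the round function. It is only a sketch of the idea; it is not the NIST FFX mode (no tweak, no standardized key schedule or round count) and must not be used to protect real card data.

```python
# Toy format-preserving encryption over 16-digit strings (illustration only).
import hashlib
import hmac

HALF = 8           # split a 16-digit number into two 8-digit halves
MOD = 10 ** HALF   # arithmetic on each half is modulo 10^8

def _prf(key: bytes, half: int, rnd: int) -> int:
    # Keyed pseudorandom round function over one half of the digit string
    msg = f"{rnd}:{half:0{HALF}d}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % MOD

def fpe_encrypt(key: bytes, pan: str, rounds: int = 10) -> str:
    left, right = int(pan[:HALF]), int(pan[HALF:])
    for rnd in range(rounds):
        left, right = right, (left + _prf(key, right, rnd)) % MOD
    return f"{left:0{HALF}d}{right:0{HALF}d}"

def fpe_decrypt(key: bytes, token: str, rounds: int = 10) -> str:
    left, right = int(token[:HALF]), int(token[HALF:])
    for rnd in reversed(range(rounds)):
        left, right = (right - _prf(key, left, rnd)) % MOD, left
    return f"{left:0{HALF}d}{right:0{HALF}d}"

key = b"demo-key-not-for-production"
pan = "4111111111111111"
token = fpe_encrypt(key, pan)
assert len(token) == 16 and token.isdigit()   # equivalent: same length and character set
assert token == fpe_encrypt(key, pan)         # referential: deterministic output
assert fpe_decrypt(key, token) == pan         # reversible: original value recovered
print(pan, "->", token)
```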

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of breached data increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, Mastercard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk of acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.
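As a sketch of how the referential property preserves relationships for third-party analytics, the fragment below tokenizes a patient identifier with a keyed, deterministic function before two de-identified extracts are joined. The field names and the HMAC-based tokenizer are illustrative assumptions, not the carrier's or hospital's actual scheme.

```python
import hmac
import hashlib

def tokenize(key: bytes, value: str) -> str:
    """Deterministic (referential) token: equal inputs always yield equal tokens."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

key = b"held-by-the-data-owner-only"    # illustrative key, never shared

# De-identified extracts handed to the analytics firm (illustrative records).
visits = [{"patient": tokenize(key, "MRN-1001"), "clinic": "cardiology"},
          {"patient": tokenize(key, "MRN-1002"), "clinic": "endocrinology"}]
labs = [{"patient": tokenize(key, "MRN-1001"), "a1c": 8.9}]

# The analytics firm can still join the two sets on the shared token,
# without ever seeing the underlying medical record numbers.
by_patient = {row["patient"]: row for row in labs}
joined = [{**v, **by_patient[v["patient"]]} for v in visits if v["patient"] in by_patient]
print(joined)
```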

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended, as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998. Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter. Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum. Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security. Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money. NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT. HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live without unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices for modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) program is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in the program's fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being on the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership as a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open, every time.

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?


Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media who is active in communities, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog

He started his "social persona" as HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as CalvinZito.

You can also contact him via email at calvin.zito@hp.com

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage. The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement; hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure. To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says. Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says:

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure, and software-defined storage. This requires registration.

• One more: this one focuses on use cases, including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.


What others are saying. Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing. VIDEO HERE

The City of Los Angeles has also adopted HPE Hyper-Converged. I love the part where the customer talks about a 30% improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions. The storage behind our hyper-converged solutions is software-defined: StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016, Fairmont San Jose Hotel, San Jose, CA


Chris Purcell has 28+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, and less constrained by physical factors; in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation: the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of its infrastructure.

As discussed in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things that's needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation. You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way: today, most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically, you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh, 4/12/2016 Doha, 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls the resources together into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimal friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
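As a rough illustration of what working against such a unified API can look like, the sketch below applies a server profile over REST instead of cabling and configuring hardware by hand. The host name, payload fields, and endpoint URIs are assumptions modeled loosely on HPE OneView-style conventions, not a documented Synergy call sequence; check the actual API reference before relying on anything like this.

```python
import requests

COMPOSER = "https://composer.example.net"   # illustrative management appliance address
HEADERS = {"X-Api-Version": "300", "Content-Type": "application/json"}

# Authenticate once and reuse the session token for subsequent calls.
# verify=False is for a lab sketch only; use proper CA verification in practice.
login = requests.post(f"{COMPOSER}/rest/login-sessions",
                      json={"userName": "admin", "password": "secret"},
                      headers=HEADERS, verify=False)
HEADERS["Auth"] = login.json()["sessionID"]

# "Compose" resources by applying a profile (compute + storage + fabric settings)
# to an available bay, pulled from the shared resource pools.
profile = {"name": "web-tier-01",
           "serverHardwareUri": "/rest/server-hardware/bay-3",              # hypothetical URI
           "serverProfileTemplateUri": "/rest/server-profile-templates/web-tier"}
resp = requests.post(f"{COMPOSER}/rest/server-profiles",
                     json=profile, headers=HEADERS, verify=False)
print(resp.status_code, resp.json().get("uri"))
```

The point of the sketch is the operating model: provisioning becomes a repeatable API call that tooling can drive, rather than a manual, box-by-box exercise.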

An infrastructure for both IT worlds. This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy, the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies operations for traditional workloads and, at the same time, accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven, cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure - Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.

Hewlett Packard Enterprise Technology User Group


Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days, many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey, with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all. When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years, we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators (data scientists, developers, IT operations managers, chief information security officers, and startup founders) who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites; everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data, you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data, but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory, or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study. Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
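A simplified sketch of that idea: group access-point ping events by a hashed device identifier to reconstruct a shopper's path and recall prior-visit history when the same device reappears. The event format and the hashing step are illustrative assumptions, not a description of any particular retailer's system.

```python
import hashlib
from collections import defaultdict

def device_key(mac: str) -> str:
    """Hash the MAC address so the store works with a pseudonymous identifier."""
    return hashlib.sha256(mac.lower().encode()).hexdigest()[:12]

# Illustrative Wi-Fi probe events captured by in-store access points.
events = [
    {"mac": "AA:BB:CC:11:22:33", "zone": "entrance", "ts": "2016-03-01T10:02"},
    {"mac": "AA:BB:CC:11:22:33", "zone": "denim",    "ts": "2016-03-01T10:09"},
    {"mac": "AA:BB:CC:11:22:33", "zone": "entrance", "ts": "2016-03-08T17:45"},
]

# Build a per-device visit history keyed on the hashed identifier.
history = defaultdict(list)
for e in events:
    history[device_key(e["mac"])].append((e["ts"], e["zone"]))

# When a known device reappears, the zones visited earlier can inform offers or staffing.
for device, path in history.items():
    print(device, sorted(path))
```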

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk; they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises. Organizations need to decide whether to perform data analytics on-premises, either virtualized or installed directly on the hard disk (i.e., "bare metal"), or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave until very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which we would often forget to do, or just not do, because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months, we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCI-P, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning, continued

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages... the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets into the millions. That's six zeros, folks. Compound that with the fact that 50 percent of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed that 80 percent of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and a lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, NIST SP 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five US bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just nice to have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures; disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security in disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront, keeping your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automation, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), a key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Given this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, the security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, and other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
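To make those lifecycle steps concrete, here is a minimal sketch of the sort of ad hoc, file-based key vault a local team might script: generate, archive, retrieve, and delete. The file name, key size, and layout are assumptions for illustration; a real deployment would add access control, backups, and audit logging, which is exactly the gap this section describes.

```python
import json
import secrets
from pathlib import Path

VAULT = Path("local-key-vault.json")    # illustrative location on the local system

def _load() -> dict:
    return json.loads(VAULT.read_text()) if VAULT.exists() else {}

def _save(vault: dict) -> None:
    VAULT.write_text(json.dumps(vault, indent=2))

def generate_key(name: str) -> str:
    """Generate a 256-bit key and archive it locally; returns the key as hex."""
    vault = _load()
    vault[name] = secrets.token_hex(32)
    _save(vault)
    return vault[name]

def get_key(name: str) -> str:
    """Distribute/import step: an application fetches the archived key by name."""
    return _load()[name]

def delete_key(name: str) -> None:
    """End of life: remove the key; data it protected becomes unrecoverable."""
    vault = _load()
    vault.pop(name, None)
    _save(vault)

generate_key("tape-backup-2016Q1")
print(get_key("tape-backup-2016Q1"))
delete_key("tape-backup-2016Q1")
```

Everything here lives on one machine with one copy of the key material, which illustrates why a lost file, a departed administrator, or a compromised host puts both the keys and the encrypted data at risk.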

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed here, along with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving the keys in the car ignition instead of in your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, because each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:

• May improve security through obscurity and isolation from a broader organization that could add access control risks

• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:

• Keys co-located with the encrypted data provide easier access if systems are stolen or compromised

• Often implemented via manual procedures over key lifecycles, prone to error, neglect, and misuse

• Places all eggs in one basket for key archives and data, without the benefit of remote backups or audit logs

• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization

• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long term

• Data mobility hurdles: media moved between locations requires key management to be moved also

• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to be deployed across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have that key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can offer simplicity similar to local methods, while also suffering from a similar dependence on manual procedures.

Pros:

• Lowers risk by not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise

• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos

• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained

• Easier to control and audit, without having to physically attend to each distributed system or application, which can be time consuming and costly

• Improves data mobility: if encryption devices move, key management systems can remain in place operationally

Cons:

• Manual procedures don't improve security if they are still not part of a systematic key management approach

• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures for ensuring which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with the encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups of keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users, using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:

• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies

• Coordinated partitioning of applications, keys, and users improves on the benefit of local management

• Automation and consistency of key lifecycle procedures are universally enforced, removing the risk of manual administration practices and errors

• Typically managed over secured networks from any location to serve global encryption deployments

• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing

• Improves data mobility: the key management system remains centrally coordinated with high availability

• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:

• Key management appliances carry higher upfront costs for a single application, but they do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies

• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize these risks

Figure 4 Central key management over wide area networks enables a single set of reliable controls and auditing over keys

Local remote and centrally unified key management continued


Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure in which resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing the security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations (see the client sketch after this list). Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when assigning user roles to specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled in a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
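To show what the KMIP interoperability point looks like from an application's side, here is a sketch of creating and retrieving an AES key through a KMIP-speaking key manager. It assumes the open-source PyKMIP client library; the host name, port, certificate file names, and exact constructor arguments shown are assumptions to verify against that project's documentation and your key manager's configuration.

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Connect to a KMIP-speaking key manager over mutually authenticated TLS.
client = ProxyKmipClient(
    hostname="keymanager.example.net",      # illustrative address
    port=5696,                              # common KMIP port
    cert="client-cert.pem",
    key="client-key.pem",
    ca="kmip-ca.pem",
)

with client:
    # Ask the central key manager to generate a 256-bit AES key; it returns an ID.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # Later, an authorized endpoint retrieves the key material by ID for local use,
    # while the key manager logs the operation for auditing.
    key = client.get(key_id)
    print(key_id, type(key).__name__)
```

Because the protocol, not the storage product, defines this exchange, the same client pattern can serve mixed vendors' encryption endpoints against one central key manager.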

Conclusions
As enterprises grow in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale, with a single-pane-of-glass view into controls and auditing, can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence without compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5. Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high-availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high-availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and secure audit logging for policy compliance validation. Together with HPE Secure Encryption for protecting data at rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployments of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. You can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan spent over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has relied primarily on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet, and PageWide) to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet, and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and occupy a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small workgroups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, and they have the smallest carbon footprint among printers in their class by a dramatic margin. Fewer consumable parts also means less maintenance and fewer replacements over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory from limited-capability M2M services to the super-capable IoT ecosystem has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option, for example low-throughput network connectivity, is required to improve the efficiency and return on investment (ROI) of such use cases.



Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things, Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify and consolidate it and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Marketplace, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors of growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage the whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (using both standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, data acquisition, and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable.

With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment of HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer), and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and is designed to be industry-vertical and vendor-agnostic. This supports access to different southbound networks and technologies, and to various applications and processes from diverse application providers across multiple verticals on the northbound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership), with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and the associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

Figure: The HPE Universal IoT Platform manages sensors across verticals and supports the data monetization chain, with standards alignment, connectivity-agnostic operation, and new service offerings. © Copyright Hewlett Packard Enterprise 2016


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C, and B2B2C models.

With the DSM module, you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
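To give a feel for the kind of device-side traffic these built-in connectors handle, here is a minimal sketch of a device publishing a telemetry reading over MQTT using the common paho-mqtt client. The broker address, credentials, and topic naming are placeholders; they do not describe the platform's actual topic structure.

```python
# Minimal sketch: a device publishing one telemetry reading over MQTT.
# Broker address, credentials, and topic naming are placeholders.
import json
import paho.mqtt.publish as publish

reading = {"deviceId": "meter-0042", "kwh": 3.72, "ts": "2016-04-01T12:00:00Z"}

publish.single(
    topic="meters/0042/telemetry",                   # placeholder topic structure
    payload=json.dumps(reading),
    qos=1,                                           # at-least-once delivery
    hostname="iot-gateway.example.com",              # placeholder broker endpoint
    port=1883,
    auth={"username": "device-user", "password": "device-secret"},  # placeholders
)
```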

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the northbound side using a oneM2M-compliant HTTP REST interface (a request sketch follows at the end of this section). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure, encrypted communication
• Access control policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring strong performance even for distinctly different types of operations, such as transactional operations and analytics/batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
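The request sketch promised above illustrates what a northbound application might do against a oneM2M-style HTTP interface: create a content instance under a device's container and read back the latest value. The base URI, originator identifier, and resource names are placeholders rather than actual HPE Universal IoT Platform paths; the headers follow the general oneM2M HTTP binding conventions.

```python
# Minimal sketch: a northbound application pushing and reading data through a
# oneM2M-style HTTP REST interface. CSE base URI, originator, and resource
# names are placeholders, not actual HPE Universal IoT Platform paths.
import json
import requests

CSE_BASE = "https://iot-platform.example.com/onem2m/cse-base"  # placeholder address

create_headers = {
    "X-M2M-Origin": "C_demo_app",             # originator (application entity) id
    "X-M2M-RI": "req-0001",                   # request identifier
    "Content-Type": "application/json;ty=4",  # ty=4 -> contentInstance resource type
}

# Push a reading into a device's container as a oneM2M contentInstance.
body = {"m2m:cin": {"con": json.dumps({"kwh": 3.72})}}
resp = requests.post(CSE_BASE + "/meter-0042/telemetry",
                     data=json.dumps(body), headers=create_headers)
resp.raise_for_status()

# Read the latest value back; "la" is the oneM2M "latest" virtual resource.
latest = requests.get(
    CSE_BASE + "/meter-0042/telemetry/la",
    headers={"X-M2M-Origin": "C_demo_app", "X-M2M-RI": "req-0002",
             "Accept": "application/json"},
)
print(latest.json())
```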

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovering meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
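As a rough illustration of the analytics side, the sketch below runs a simple hourly aggregate over collected readings using the vertica-python client. The connection details, table, and column names are hypothetical and do not reflect the platform's actual schema.

```python
# Minimal sketch: querying hourly averages from a Vertica-backed store of IoT
# readings. Connection details, table, and column names are hypothetical.
import vertica_python

conn_info = {
    "host": "vertica.example.com",   # placeholder analytics database host
    "port": 5433,
    "user": "dbadmin",
    "password": "secret",
    "database": "iot",
}

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT device_id,
               DATE_TRUNC('hour', reading_ts) AS reading_hour,
               AVG(reading_value)             AS avg_value
        FROM sensor_readings
        GROUP BY device_id, DATE_TRUNC('hour', reading_ts)
        ORDER BY reading_hour
    """)
    for device_id, reading_hour, avg_value in cur.fetchall():
        print(device_id, reading_hour, avg_value)
finally:
    conn.close()
```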

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. It must sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data depends on the size of the company; the insights gleaned from it do not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementing business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
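As a small, hedged illustration of that first step, the sketch below splits an order history into first-time and repeat buyers and compares their spend using pandas. The file name and column names (customer_id, order_date, amount) are assumptions about your data, not a prescribed schema.

```python
# Minimal sketch: segmenting first-time vs. repeat customers from an order
# history. File and column names (customer_id, order_date, amount) are
# assumptions about your data, not a required schema.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# One row per customer: how many orders they placed and how much they spent.
per_customer = orders.groupby("customer_id").agg(
    order_count=("amount", "count"),
    total_spend=("amount", "sum"),
)
per_customer["segment"] = per_customer["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare how large each segment is and how much it is worth.
print(per_customer.groupby("segment")["total_spend"].agg(["count", "mean", "sum"]))
```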

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START, THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for staff authentication and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and the costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of the Ponemon study that I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals, and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances
Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.


Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviations from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
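To give a feel for the SIEM integration piece, here is a generic sketch that forwards a security event to a syslog collector in ArcSight's Common Event Format (CEF). It is only an illustration; on NonStop this integration is normally handled by tools such as XYGATE Merged Audit, and the host names, event fields, and severity shown are placeholders.

```python
# Minimal sketch: forwarding one security event to a SIEM's syslog collector in
# Common Event Format (CEF). Host, fields, and severity are placeholders; on
# NonStop this is normally handled by tools such as XYGATE Merged Audit.
import socket
from datetime import datetime

# CEF header: CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension
event = (
    "CEF:0|ExampleVendor|ExampleAudit|1.0|100|"
    "Privileged logon outside change window|7|"
    "suser=SUPER.OPERATOR src=10.0.0.12 msg=Interactive logon to production system"
)

# Wrap the event in a syslog line; <134> encodes facility local0, severity informational.
syslog_line = "<134>{0} nonstop-host {1}".format(
    datetime.utcnow().strftime("%b %d %H:%M:%S"), event
)

# Ship it over UDP to the SIEM's collector (address is a placeholder).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(syslog_line.encode("utf-8"), ("siem.example.com", 514))
sock.close()
```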

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementing those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kenneth.scudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie it together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was, and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then how young he was. It's just funny when you're in a sales engagement as peers and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned, veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you had asked me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective as a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle Tom what has it been like from your perspective to be Jeffrsquos mentor

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy!" He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

New! Now audits 100% of all SQL/MX and MP user activity. Integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release included so many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 there was a Community effort started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, who review the monthly conference call notes, or who listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The porting effort of John Malmberg on cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

config_h.com generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking onto an ODS-2 volume, in addition to handling the ODS-5 format name.
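
For readers unfamiliar with this style of header generation, here is a minimal Python sketch of the general idea only. It is not John's config_h.com (which is a DCL procedure), and the symbol table and substitution rules are simplified assumptions: it reads a config.h.in template, resolves each #undef line it knows about, leaves the rest commented out for the product-specific file to supply, and appends the package-specific include at the end.

```python
# Illustrative only: a simplified, Python-based stand-in for an
# autoconf-style header generator. The real GNV tooling is a DCL
# procedure (config_h.com); names and values here are assumptions.

KNOWN = {                      # answers the generator "knows" (hypothetical values)
    "HAVE_STRING_H": "1",
    "HAVE_UNISTD_H": "1",
    "SIZEOF_LONG": "8",
}

def generate_config_h(template_path, output_path, vms_include="config_vms.h"):
    with open(template_path) as src, open(output_path, "w") as dst:
        for line in src:
            stripped = line.strip()
            if stripped.startswith("#undef "):
                symbol = stripped.split()[1]
                if symbol in KNOWN:
                    dst.write(f"#define {symbol} {KNOWN[symbol]}\n")
                else:
                    # Leave the ~5% it cannot resolve for the package-specific file.
                    dst.write(f"/* #undef {symbol} */\n")
            else:
                dst.write(line)
        # Package-specific corrections are supplied at the end, as in GNV.
        dst.write(f'#include "{vms_include}"\n')

if __name__ == "__main__":
    generate_config_h("config.h.in", "config.h")
```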

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and has had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40: Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical open source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained essentially constant, bounded by O(1), the number of records exposed has grown exponentially, on the order of O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1This analysis excludes the Anthem Inc breach reported on March 13 2015 as it alone is two times larger than the sum of all other breaches reported to date in 2015

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31 percent are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24 percent of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


2 We use the term "protection" as a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack, from top to bottom: Application, Database, Object, Storage. Example traffic at the layers includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20 percent of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
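
To make the three properties concrete, here is a minimal Python sketch. It is emphatically not the NIST FFX algorithm: it uses a toy keyed digit transform purely to show that a protected value keeps the input's length and character set (equivalent), always maps the same input to the same output (referential), and can be inverted with the key (reversible). A production system would use a vetted FF1/FF3-style implementation instead.

```python
# Illustrative toy only: NOT the NIST FFX/FPE algorithm and NOT secure.
# It exists solely to demonstrate the equivalent / referential / reversible
# properties discussed above.
import hashlib

def _offset(key: bytes, n_digits: int) -> int:
    # Derive a fixed pseudorandom offset in [0, 10^n) from the key.
    digest = hashlib.sha256(key + n_digits.to_bytes(2, "big")).digest()
    return int.from_bytes(digest, "big") % (10 ** n_digits)

def protect(card_number: str, key: bytes) -> str:
    n = len(card_number)
    value = (int(card_number) + _offset(key, n)) % (10 ** n)
    return str(value).zfill(n)          # same length, digits only

def recover(protected: str, key: bytes) -> str:
    n = len(protected)
    value = (int(protected) - _offset(key, n)) % (10 ** n)
    return str(value).zfill(n)

if __name__ == "__main__":
    key = b"demo-key-held-only-by-the-payment-interface"   # hypothetical key
    pan = "4111111111111111"
    token = protect(pan, key)
    assert len(token) == len(pan) and token.isdigit()      # equivalent
    assert token == protect(pan, key)                       # referential
    assert recover(token, key) == pan                       # reversible
```

Because the toy transform is a bijection on n-digit strings, two different card numbers of the same length can never protect to the same value, which is the property that preserves primary and foreign key relationships in a protected database.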

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18 percent software licensing cost reduction and up to 55 percent reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


7

Calvin Zito is a 33-year veteran of the IT industry and has worked in storage for 25 years. He's been a VMware vExpert for 5 years. As an early adopter of social media and active in communities, he has blogged for 7 years.

You can find his blog at hp.com/storage/blog

He started his "social persona" as HPStorageGuy and, after the HP separation, manages an active community of storage fans on Twitter as CalvinZito.

You can also contact him via email at calvin.zito@hp.com

Let Me Help You With Hyper-Converged
Calvin Zito

HPE Blogger

Storage Evangelist

CALVIN ZITO

If you're considering hyper-converged infrastructure, I want to help you with a few papers and videos that will prepare you to ask the right questions. After all, over the last couple of years we've had a lot of posts here on the blog talking about software-defined storage and hyper-converged, and we started SDS Saturday to cover the topic. We've even had software-defined storage in our tool belt for more than seven years, but hyper-converged is a relatively new technology.

It starts with software-defined storage. The move to hyper-converged was enabled by software-defined storage (SDS). Hyper-converged combines compute and storage in a single platform, and SDS was a requirement. Hyper-converged is a deployment option for SDS. I just did a ChalkTalk that gives an overview of SDS and talks about the deployment options.

Top 10 things you need to consider when buying a hyper-converged infrastructure: To achieve the best possible outcomes from your investment, ask the tough questions of your vendor to make sure that they can meet your needs in a way that helps you better support your business. Check out Top 10 things you need to consider when buying a hyper-converged infrastructure.

Survey says: Hyper-convergence is growing in popularity even as people are struggling to figure out what it can do, what it can't do, and how it impacts the organization. ActualTech Media conducted a survey that taps into more than 500 IT technology professionals from companies of all sizes across 40 different industries and countries. The goal was to learn about people's existing datacenter challenges, how they feel about emerging technology like hyper-converged infrastructure and software-defined storage, and to discover perceptions, particularly as they pertain to VDI and ROBO deployments.

Here are links so you can see what the survey says:

• First, the executive summary of the research.

• Next, the survey results on datacenter challenges, hyper-converged infrastructure, and software-defined storage. This requires registration.

• One more: this focuses on use cases, including Virtual Desktop Infrastructure, Remote-Office/Branch-Office, and Public & Private Cloud. Again, this one requires registration.

8

What others are saying: Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing. VIDEO HERE

The City of Los Angeles has also adopted HPE Hyper-Converged. I love the part where the customer talks about a 30 percent improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions: The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016
Fairmont San Jose Hotel
San Jose, CA

9

Chris Purcell has 28+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, to make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of their infrastructure.

As noted in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things that's needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation: You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way – today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization, your choices are limited by form factors; basically, you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh, 4/12/2016 Doha, 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.

An infrastructure for both IT worlds: This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter. HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and, at the same time, accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.

Hewlett Packard Enterprise Technology User Group

10

11

Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey, with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all When organizations try to do more with data analytics to benefit their business they have to take into consideration the technology skills and culture that exist in their company

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators – data scientists, developers, IT operations managers, chief information security officers, and startup founders – who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.

12

"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites – everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study: Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
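
As a rough illustration of the idea only (not any particular retailer's system), the sketch below keeps a per-device visit history keyed on a salted hash of the MAC address, so a store can recognize a returning device without storing the raw hardware address. The probe capture itself and any privacy or consent handling are assumed to exist elsewhere, and the salt and zone names are hypothetical.

```python
# Illustrative sketch only: recognize repeat store visits keyed on a hashed MAC.
# Capturing Wi-Fi probe requests and privacy/consent handling are out of scope.
import hashlib
from collections import defaultdict
from datetime import datetime

SALT = b"store-1234-rotating-salt"          # hypothetical per-store salt

def device_id(mac):
    """Pseudonymize the MAC so raw hardware addresses are never stored."""
    return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()[:16]

visits = defaultdict(list)                   # device_id -> list of (timestamp, zone)

def record_ping(mac, zone, ts=None):
    visits[device_id(mac)].append((ts or datetime.now(), zone))

def is_returning(mac):
    return len(visits[device_id(mac)]) > 1

# Example: two pings from the same phone on different days
record_ping("AA:BB:CC:DD:EE:FF", "entrance", datetime(2016, 3, 1, 10, 5))
record_ping("AA:BB:CC:DD:EE:FF", "denim", datetime(2016, 3, 8, 17, 40))
print(is_returning("AA:BB:CC:DD:EE:FF"))     # True
```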

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk – they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises: Organizations need to decide whether to perform data analytics on-premises – either virtualized or installed directly on the hard disk (i.e., "bare metal") – or by using a cloud, as-a-service model. Companies need to do a cost–benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment Those companies typically try to maximize CPU usage and then add nodes to increase capacity

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We shared the space, and the rent, with four other companies. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, everything. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which we would often forget to do or simply skip because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (continued)

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cybersecurity incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises that number skyrockets to millions. That's six zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
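As a rough back-of-the-envelope check on those figures (the hourly cost and outage length come from the article; the multiplication is ours):

```python
# Downtime cost for a small business, using the article's figures.
hourly_cost = 8_000      # dollars per hour of downtime (small business)
outage_hours = 24        # half of outages reportedly last at least this long

print(f"${hourly_cost * outage_hours:,}")  # $192,000 for a single day-long outage
```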

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed that 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40% of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and a lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, NIST SP 800-34 and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top-five U.S. bank indicated that they would rather experience downtime than deal with a breach. The last thing you want during disaster recovery is to be hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that data restored to a DR system can be much easier to access, and activity on it harder to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just nice to have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems, effectively prepare you for recovery operations while keeping security at the forefront, and keep your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data, which can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automation, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), a key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on the scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
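For illustration only, the sketch below shows the kind of ad hoc, manual lifecycle described above - generate, archive and later reload a key - using the open-source Python cryptography library. The file path and the choice of Fernet are assumptions for the example, not a recommended practice.

```python
from pathlib import Path
from cryptography.fernet import Fernet

ARCHIVE = Path("/secure/keys/app01.key")   # local "vault": just a file on this host

def generate_and_archive_key() -> bytes:
    """Generate a symmetric key and archive it locally (no central backup or audit)."""
    key = Fernet.generate_key()
    ARCHIVE.write_bytes(key)
    return key

def load_key_and_decrypt(token: bytes) -> bytes:
    """Reload the archived key; if this one file is lost, the data is unrecoverable."""
    key = ARCHIVE.read_bytes()
    return Fernet(key).decrypt(token)
```

The single local file is exactly the "all eggs in one basket" exposure the article goes on to describe.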

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that would undermine local encryption and key management lifecycle operations.

Local, remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed here, along with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-locating keys with the encrypted data provides easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
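A purely hypothetical sketch of the boot-time flow just described: the device asks a remote key service for its volume key, and a key that has been destroyed simply never arrives. The endpoint, status codes and field names are invented for illustration and do not describe any real key manager's API.

```python
from typing import Optional
import requests

KEY_SERVER = "https://keys.example.com"   # hypothetical remote key manager

def fetch_volume_key(device_id: str, token: str) -> Optional[bytes]:
    """Request this device's volume key at boot; return None if it was revoked."""
    resp = requests.get(
        f"{KEY_SERVER}/v1/keys/{device_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    if resp.status_code == 410:   # key destroyed after a theft report
        return None               # the encrypted volume stays unreadable
    resp.raise_for_status()
    return bytes.fromhex(resp.json()["key_material"])
```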

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at a lower risk.

While application keys are no longer co-located with the data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in the same place operationally

Cons:
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed in redundant clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but they enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing the security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote and centralized controls over keys share a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility and the security assurance levels provided.

• Hardware or software assurance: Key management servers deployed as appliances, virtual appliances or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

• Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a minimal client sketch follows this list).

• Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

• Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when assigning user roles to specific responsibilities.

• High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

• Scalability: As applications scale and new applications are enrolled in a central key management system, keys, application connectivity and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

• Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment, and integration with enterprise systems via SNMP, syslog, email alerts and similar methods helps ensure IT visibility.

• Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
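As a minimal illustration of the KMIP interoperability mentioned above, the sketch below uses the open-source PyKMIP client to create and fetch an AES key from any KMIP-compliant key manager; the hostname, port and certificate paths are placeholders, and this is not HPE-specific code.

```python
from kmip.pie.client import ProxyKmipClient
from kmip import enums

# Connection details are placeholders for any KMIP-compliant key manager.
client = ProxyKmipClient(
    hostname="keymanager.example.com",
    port=5696,
    cert="/etc/pki/client.crt",
    key="/etc/pki/client.key",
    ca="/etc/pki/ca.crt",
)

with client:
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)  # create a 256-bit AES key
    key = client.get(key_id)                                       # retrieve it later by UID
    print(key_id, key.cryptographic_algorithm)
```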

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence without compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high-availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability and the high availability that maintains business continuity.

Standard HPE ESKM capabilities include high-availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployments of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. You can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use and in motion. Our solutions provide advanced encryption, tokenization and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google) and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.




Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet and PageWide) to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions and services for SMBs and enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts mean there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups that need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services and solutions into one integrated approach. Working with you, we assess, deploy and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit:
www.hp.com/go/pagewideclaims
www.hp.com/go/LJclaims
www.hp.com/go/learnaboutsupplies
www.hp.com/go/printerspeeds



IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2
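As a quick sanity check on what that growth rate implies (the 2013 base below is our own rough inference for illustration, not a figure from the article):

```python
# Compound annual growth: base * (1 + rate) ** years
assumed_base_2013 = 3.0e9   # assumed 2013 installed base (illustrative only)
cagr = 0.317                # 31.7% per year
years = 7                   # 2013 through 2020

installed_2020 = assumed_base_2013 * (1 + cagr) ** years
print(f"{installed_2020 / 1e9:.1f} billion units")   # roughly 20.6 billion
```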

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low-throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton
Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances and software development.


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management and charging
• Data processing layer operators to acquire data, then verify, consolidate and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors of growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a horizontal platform is needed. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage the whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed or mobile networks. These include GPRS (2G and 3G), LTE 4G and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries and municipalities

• Faster time-to-value with accelerated deployment of HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer) and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with the option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors and verticals, data monetization chain, standards alignment, connectivity-agnostic, new service offerings. © Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plans, subscriptions, device association and more) and IoT gateways and devices, including provisioning, configuration and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies needed to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST and others.
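To ground one of those protocols, here is a minimal sketch of a device publishing a reading over MQTT using the open-source Eclipse Paho client; the broker address, topic and payload are assumptions for the example and are not specific to the HPE platform.

```python
import json
import paho.mqtt.client as mqtt

# Placeholder broker and topic; a real deployment would also configure TLS and auth.
BROKER = "iot-gateway.example.com"
TOPIC = "meters/meter-001/reading"

client = mqtt.Client(client_id="meter-001")
client.connect(BROKER, 1883)
client.loop_start()
client.publish(TOPIC, json.dumps({"kwh": 42.7, "ts": "2016-04-01T12:00:00Z"}), qos=1)
client.loop_stop()
client.disconnect()
```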

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, maintaining it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications in turn can discover, access and consume these resources on the north-bound side using the oneM2M-compliant HTTP REST interface (a minimal retrieval sketch follows the list below). The DAV component is also responsible for transformation, validation and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Processing data and triggering actions based on the type of message, such as alarm processing and complex-event processing
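The sketch below illustrates the kind of oneM2M-style HTTP retrieval referenced above: an application fetching the latest content instance from a device's data container. The base URL, resource path and originator are invented for illustration and do not describe the actual HPE API.

```python
import requests

# Illustrative oneM2M-style request: fetch the latest contentInstance ("la")
# from a device's data container. All names below are placeholders.
BASE = "https://iot-platform.example.com/onem2m"
headers = {
    "X-M2M-Origin": "C-retail-app",   # originator (application entity ID)
    "X-M2M-RI": "req-0001",           # request identifier
    "Accept": "application/json",
}

resp = requests.get(f"{BASE}/cse-name/meter-001/data/la", headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())
```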

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access Control Policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring strong performance even for distinctly different types of operations, such as transactional operations and analytics/batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time – based on 'Complex-Event Processing' – for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicator, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
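As a rough illustration of the descriptive-analytics case, the sketch below runs a KPI-style aggregate over IoT readings from Python using the vertica-python client. The connection details and the table and column names (iot.readings, site_id, kwh, reading_ts) are invented for the example; the actual schema exposed by the platform will differ.

```python
# Hedged sketch: daily average meter reading per site from a Vertica table.
# Requires the vertica-python client: pip install vertica-python
import vertica_python

conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analyst",
    "password": "secret",
    "database": "iot",
}

QUERY = """
    SELECT site_id,
           DATE_TRUNC('day', reading_ts) AS day,
           AVG(kwh) AS avg_kwh
    FROM iot.readings
    GROUP BY site_id, DATE_TRUNC('day', reading_ts)
    ORDER BY day, site_id
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    for site_id, day, avg_kwh in cur.fetchall():
        print(site_id, day, round(avg_kwh, 2))
```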

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.

30

Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value through monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE

31

WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can have on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen

32

BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstance when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
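As a small example of that kind of segmentation, the sketch below splits an order log into first-time and repeat customers and compares average order value. The file and column layout (customer_id and order_total columns in an orders.csv file) is assumed purely for illustration.

```python
# Hedged sketch: split customers into first-time vs. repeat buyers from an
# order log and compare average order value. File and column names are assumed.
import csv
from collections import defaultdict

orders_by_customer = defaultdict(list)
with open("orders.csv", newline="") as f:
    for row in csv.DictReader(f):             # expects customer_id, order_total
        orders_by_customer[row["customer_id"]].append(float(row["order_total"]))

first_time = [t[0] for t in orders_by_customer.values() if len(t) == 1]
repeat = [x for t in orders_by_customer.values() if len(t) > 1 for x in t]

def avg(values):
    return sum(values) / len(values) if values else 0.0

print(f"First-time buyers: {len(first_time)}, avg order ${avg(first_time):.2f}")
print(f"Repeat-buyer orders: {len(repeat)}, avg order ${avg(repeat):.2f}")
```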

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.

33

As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. CHUV uses built-in ClearPass device profiling capabilities to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.

34

35

The latest reports on IT security all seem to point to a similar trend – both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study, which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime – which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39 percent and 35 percent of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users

36

Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99 percent of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35 percent of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes – which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50 percent of companies in the study had implemented access governance tools, and fewer than 45 percent had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study – but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23 percent ROI and encryption technologies had a 21 percent ROI. Access governance had a 13 percent ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.
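For readers who want to see how those percentages are typically derived, ROI in studies like this is generally net benefit divided by cost. A tiny worked example with made-up numbers (not figures from the Ponemon report):

```python
# Illustrative ROI arithmetic only; the dollar figures are invented, not taken
# from the Ponemon study. ROI = (annual benefit - annual cost) / annual cost.
annual_cost = 500_000       # e.g., licensing plus staffing for a SIEM deployment
annual_benefit = 615_000    # e.g., avoided breach losses plus efficiency gains

roi = (annual_benefit - annual_cost) / annual_cost
print(f"ROI: {roi:.0%}")    # -> ROI: 23%
```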

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems – including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle Tell us about how things have been going while Tom prepares to retire

Jeff Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle So the transition has already taken place?

Jeff Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle So it doesn't differ too much, then, from your previous role?

Jeff No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role and I'm definitely ready for it.

Gabrielle Could you give us a little bit of information about your background leading into your time at HPE?

Jeff My background with NonStop started in the late 90s when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring me as a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom At the time it was an experiment on my behalf: back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle Could you share one?

Tom Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender – nothing serious – but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing and I realized then he was so young. It's just funny when you start getting to sales engagement and you're peers and then you realize this difference in age.

Jeff When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then for the third and final time I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle That's such a great story.

Jeff It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community – everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places, I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom Every time Jeff left he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle It really is a unique community. I think we are all lucky to be a part of it.

Jeff Agreed

Tom I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are next to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle Jeff, what was it like to have Tom as your long-time mentor?

Jeff It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle Tom, what has it been like from your perspective to be Jeff's mentor?

Tom Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle Tom, was there a mentor who has motivated you to be able to influence people like Jeff?

Tom Oh yes. I think everyone looks for a mentor and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years and we see each other very often. He's one of the smartest men I know and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's

really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with

hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff That's an interesting question, because the benefit of having him here for this transition, for this six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle It's just kind of a bittersweet moment.

Jeff Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle Yes, Tom is kind of a legend in the NonStop space.

Jeff He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle We wish you the best of luck in your new position, Jeff.

Jeff Thank you

40

SQLXPress – Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies

New! Now audits 100% of all SQL/MX & MP user activity – integrated with XYGATE Merged Audit


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation which is of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release included many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 there was a Community effort started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, who review the monthly conference call notes, or who listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV in order to be used.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The porting effort of John Malmberg on cPython 3.6a0+ is an example of using the above tools for a build. It is a work-in-progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways the ability to easily port either Open Source Software to OpenVMS or to maintain a code base consistent between OpenVMS and other platforms is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. There are also updated ports in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national Oceanography Cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40: Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including clamAV anti-virus software, A2PS, an ASCII to PostScript converter, an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's Archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices – Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31 percent are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24 percent of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" as a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

Layer stack, top to bottom, with example traffic flowing between layers:

Application
  – formatted data items –
Database
  – files and directories –
Object
  – disk blocks –
Storage

Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning having the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20 percent of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number (a toy sketch illustrating these properties follows the list below). The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm
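The toy sketch below illustrates those three properties with a small keyed Feistel permutation over 16-digit strings. It is not NIST FFX and is not production-grade; it exists only to show what equivalent, referential, and reversible mean in practice.

```python
# Toy keyed permutation over 16-digit strings illustrating the equivalent,
# referential, and reversible properties of format-preserving protection.
# This is NOT NIST FFX and must not be used to protect real card data.
import hashlib
import hmac

KEY = b"demo-key-change-me"   # key management is out of scope for this sketch
ROUNDS = 10
MOD = 10**8                   # each Feistel half is 8 digits

def _round_value(half: int, rnd: int) -> int:
    mac = hmac.new(KEY, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big") % MOD

def protect(pan: str) -> str:
    left, right = int(pan[:8]), int(pan[8:])
    for i in range(ROUNDS):
        left, right = right, (left + _round_value(right, i)) % MOD
    return f"{left:08d}{right:08d}"

def reveal(token: str) -> str:
    left, right = int(token[:8]), int(token[8:])
    for i in reversed(range(ROUNDS)):
        left, right = (right - _round_value(left, i)) % MOD, left
    return f"{left:08d}{right:08d}"

pan = "4111111111111111"
token = protect(pan)
assert len(token) == 16 and token.isdigit()   # equivalent: same format and length
assert protect(pan) == token                  # referential: deterministic output
assert reveal(token) == pan                   # reversible: original is recoverable
print(pan, "->", token)
```

In the article's scenario, the web front end would call the equivalent of protect() at ingress, and only the payment interface would hold the key needed for reveal().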

Now as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronics Engineers, 26 Feb. 2013. Web. 2 Nov. 2015.
6. US Department of Health & Human Services, Office for Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 9 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 8 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 2 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 9 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999 percent – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18 percent software licensing cost reduction and up to 55 percent reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story
RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices for modernizing your SAP business processing applications.



Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of HyperStream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.


What others are saying
Here's a customer, Sonora Quest, talking about its use of hyper-converged for virtual desktop infrastructure and the benefits they are seeing. VIDEO HERE

The City of Los Angeles also has adopted HPE Hyper-Converged. I love the part where the customer talks about a 30 percent improvement in performance and says it's "exactly what we needed." VIDEO HERE

Get more on HPE Hyper-Converged solutions
The storage behind our hyper-converged solutions is software-defined StoreVirtual VSA. HPE was doing software-defined storage before it was cool. What's great is you can get access to a free 1TB VSA download.

Go to hpe.com/storage/TryVSA and check out the storage that is inside our hyper-converged solutions.

Lastly, here's a ChalkTalk I did with a really good overview of the Hyper Converged 250. VIDEO HERE

Learn more about HPE Software-Defined Storage solutions. Learn more about HPE Hyper-Converged solutions.

November 13-16, 2016, Fairmont San Jose Hotel, San Jose, CA


Chris Purcell has 28+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as @Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources, so that IT can make better use of their infrastructure.

As I noted in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation
You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way: today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies. You may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization your choices are limited by form factors; basically, you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh
4/12/2016 Doha
4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
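The "unified API" idea can be pictured with a short sketch. The endpoint, payload fields, and composer host below are purely hypothetical (this is not the documented HPE Synergy API); the point is only that a template describing the compute, storage, and fabric a workload needs can be composed with a single programmatic call.

    import requests

    # Hypothetical composer endpoint; illustrates "infrastructure as code" provisioning
    COMPOSER = "https://composer.example.local"

    template = {
        "name": "web-tier",
        "compute": {"cores": 16, "memoryGiB": 128},
        "storage": {"capacityGiB": 500, "tier": "flash"},
        "fabric": {"networks": ["prod-vlan-120"], "bandwidthGb": 10},
    }

    # One call asks the software intelligence to pull the needed resources from the pools
    resp = requests.post(f"{COMPOSER}/profiles", json=template, timeout=30)
    resp.raise_for_status()
    print("composed resource profile:", resp.json()["name"])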

An infrastructure for both IT worlds
This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter: HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services. By doing so, it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.


Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases – and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey – with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all
When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years, we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators – data scientists, developers, IT operations managers, chief information security officers, and startup founders – who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites – everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study
Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
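The article doesn't prescribe an implementation, but a minimal sketch shows the idea: hash each probing MAC address (with a per-store salt, an assumption for this example) into a stable, anonymized visitor ID, then count repeat visits.

    import hashlib
    from collections import Counter

    SALT = b"store-422"   # hypothetical per-store salt

    def visitor_id(mac: str) -> str:
        """Derive a stable, anonymized identifier from a Wi-Fi probe's MAC address."""
        return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()[:12]

    # Simulated probe requests captured by in-store access points
    probes = ["AA:BB:CC:11:22:33", "aa:bb:cc:11:22:33", "DE:AD:BE:EF:00:01"]

    visits = Counter(visitor_id(m) for m in probes)
    repeat_visitors = [v for v, n in visits.items() if n > 1]
    print(f"{len(visits)} unique visitors, {len(repeat_visitors)} seen more than once")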

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the BriefingsDirect podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction in real time, regardless of where they are. They don't have to be at their desk – they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises – either virtualized or installed directly on the hard disk (i.e., "bare metal") – or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We shared the space and the rent with four other companies. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which we would often forget to do, or just not do because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months, we bit the bullet and had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (continued)

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages... the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's six zeros, folks. Compound that with the fact that 50 percent of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80 percent of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and a lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is – THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, NIST 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open to attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out, and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, with activity on it more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just nice to have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures; disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data, and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding involved in maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), the key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, and other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, and archive or vault keys for long-term recovery – and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize the external risks that would undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed below, along with their impact on supporting enterprise policy.

Figure 1 Local key management over a local network where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications – the analogy being locking your front door but placing the keys under a doormat, or leaving the keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Keys co-located with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles – prone to error, neglect, and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles – media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data.

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at a lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have simplicity similar to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple – similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility – if encryption devices move, key management systems can remain in the same place operationally

Cons:
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified – or, commonly, an enterprise secure key management – system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed in clusters, redundantly for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation.

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Recognizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility – the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system – best practices can minimize these risks

Figure 4 Central key management over wide area networks enables a single set of reliable controls and auditing over keys


Best practices – adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing the security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI – relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels. A hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (see the client sketch after this list).

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when assigning user roles to specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability: As applications scale and new applications are enrolled in a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
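As a sketch of what KMIP interoperability looks like from an encryption endpoint, the snippet below assumes the open-source PyKMIP client library; the hostname, port, and certificate paths are placeholders, and any KMIP-compliant key manager could sit on the other end of the connection.

    from kmip.core import enums
    from kmip.pie.client import ProxyKmipClient

    client = ProxyKmipClient(
        hostname="keymanager.example.local", port=5696,
        cert="client-cert.pem", key="client-key.pem", ca="ca.pem")

    with client:
        # Ask the key manager to generate a 256-bit AES key and return its identifier
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
        # Retrieve the key later for an encryption operation on the endpoint
        key = client.get(key_id)
        # Destroy it at end of life; the server retains the audit trail
        client.destroy(key_id)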

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high-availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security – Data Security
HPE Security – Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security – Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet, and PageWide) to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, whether laser, inkjet, or now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc's Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communication & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (using both standard and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable.

With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer), and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical- and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development/deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways/devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C models.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices (including provisioning, configuration, and monitoring), and troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices, and for communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
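To make the device side of this concrete, here is a minimal sketch of the kind of traffic such protocol connectors handle, using the open-source Eclipse Paho MQTT client. The broker address, topic, and payload layout are hypothetical placeholders, not part of the HPE platform's documented interface.

```python
# Minimal sketch: a metering device publishing a reading over MQTT
# (paho-mqtt 1.x style API). Broker host, topic, and payload fields
# are hypothetical placeholders.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "iot-gateway.example.com"   # hypothetical MQTT broker / connector
TOPIC = "meters/site-42/electricity"      # hypothetical topic naming scheme

client = mqtt.Client(client_id="meter-0042")
client.connect(BROKER_HOST, port=1883, keepalive=60)

reading = {
    "deviceId": "meter-0042",
    "timestamp": int(time.time()),
    "kwh": 1.27,
}

# QoS 1: the broker acknowledges receipt, a common choice for metering data.
client.publish(TOPIC, json.dumps(reading), qos=1)
client.disconnect()
```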

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a request sketch follows the list below). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
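The sketch below illustrates what a north-bound read of that resource model might look like over a oneM2M-style HTTP REST binding; the base URL, resource path, and originator credential are hypothetical placeholders standing in for whatever a real deployment exposes.

```python
# Minimal sketch: an application reading the latest content instance for a
# device through a oneM2M-style HTTP REST interface. The endpoint, resource
# path, and originator identifier are hypothetical placeholders.
import requests

BASE_URL = "https://iot-platform.example.com:8443"        # hypothetical CSE endpoint
RESOURCE = "/~/in-cse/in-name/meter-0042/electricity/la"  # 'la' = latest contentInstance

headers = {
    "X-M2M-Origin": "C-energy-app",   # originator (application entity) identifier
    "X-M2M-RI": "req-0001",           # request identifier
    "Accept": "application/json",
}

resp = requests.get(BASE_URL + RESOURCE, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())   # the contentInstance resource, including its payload
```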

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure/encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
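For a sense of how an analytics application might pull device data out of a columnar store such as Vertica, here is a minimal sketch using the open-source vertica-python client; the connection details, table, and column names are hypothetical placeholders for whatever schema a real deployment exposes.

```python
# Minimal sketch: querying daily consumption per device from a columnar store
# with the vertica-python client. Connection details, table, and column names
# are hypothetical placeholders.
import vertica_python

conn_info = {
    "host": "analytics.example.com",
    "port": 5433,
    "user": "iot_analyst",
    "password": "********",
    "database": "iot",
}

query = """
    SELECT device_id, DATE(reading_ts) AS day, SUM(kwh) AS total_kwh
    FROM meter_readings
    WHERE reading_ts >= CURRENT_DATE - 7
    GROUP BY device_id, DATE(reading_ts)
    ORDER BY device_id, day
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(query)
    for device_id, day, total_kwh in cur.fetchall():
        print(device_id, day, total_kwh)
```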

Operations and Business Support Systems (OSS/BSS)
The OSS/BSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The OSS/BSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value through monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
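As a deliberately simple illustration of that first segmentation step, the sketch below uses pandas to split customers into first-time and repeat buyers from an order export; the file name and column names are hypothetical.

```python
# Minimal sketch: segmenting first-time vs. repeat customers from order history
# with pandas. The file name and column names are hypothetical placeholders.
import pandas as pd

orders = pd.read_csv("orders.csv")  # hypothetical export: customer_id, order_id, order_total

summary = (
    orders.groupby("customer_id")
    .agg(order_count=("order_id", "count"), total_spend=("order_total", "sum"))
    .reset_index()
)

# Label each customer, then compare the two segments' size and average spend.
summary["segment"] = summary["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)
print(summary.groupby("segment")["total_spend"].agg(["count", "mean"]))
```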

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS? CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances
Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and U.S. diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring me so early in my career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf; back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was, and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor
by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies, in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style, is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle Tom what has it been like from your perspective to be Jeffrsquos mentor

Tom Jeff was easy Hersquos very bright and has a wonderful sales personality Itrsquos easy to help people achieve their goals when they have those kinds of traits and Jeff is clearly one of the best in that area

A really fun thing for me is to see people grow in a job I have been very blessed to have been mentoring people who have gone on to do some really wonderful things Itrsquos just something that I enjoy doing more than anything else

Gabrielle Tom was there a mentor who has motivated you to be able to influence people like Jeff

Tom Oh yes I think everyone looks for a mentor and Irsquom no exception One of them was a regional VP of Tandem named Terry Murphy We met at Data General and hersquos the one who convinced me to go into sales management and later he sold me on coming to Tandem Itrsquos a friendship thatrsquos gone on for 35 years and we see each other very often Hersquos one of the smartest men I know and he has great insight into the sales process To this day hersquos one of my strongest mentors

Gabrielle Jeff what are some of the ideas you have for the role and for the company moving forward

Jeff One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch I canrsquot imagine doing a much better job in servicing our customers who are the first priority always But what I really want to see us do is take an aggressive approach to growth Everybody always wants to grow but I think we are at an inflection point here where we have a window of opportunity to do that whether thatrsquos with existing customers in the financial services and payments space expanding into different business units within that industry or winning entirely new customers altogether We have no reason to think we canrsquot do that So for me I want to take an aggressive and calculated approach to going after new business and I also want to make sure the team is having some fun doing it Thatrsquos

really the message I want to start to get across to our own people and I want to really energize the entire NonStop community around that thought too I know our partners are all excited about our direction with

hybrid architectures and the potential of NonStop-as-a-Service down the road We should all feel really confident about the next few years and our ability to grow top line revenue

Gabrielle When Tom leaves in the spring whatrsquos the first order of business once yoursquore flying solo and itrsquos all yours

Jeff Thatrsquos an interesting question because the benefit of having him here for this transition for this six months is that I feel like there wonrsquot be a hard line where all of a sudden hersquos not here anymore Itrsquos kind of strange because I havenrsquot really thought too much about it I had dinner with Tom and his wife the other night and I told them that on June first when we have our first staff call and hersquos not in the virtual room thatrsquos going to be pretty odd Therersquos not necessarily a first order of business per se as it really will be a continuation of what we would have been doing up until that point I definitely am not waiting until June to really get those messages across that I just mentioned Itrsquos really an empowerment and the goals are to make Tom proud and to honor what he has done as a career I know I will have in the back of my mind that I owe it to him to keep the momentum that hersquos built Itrsquos really just going to be putting work into action

Gabrielle Itrsquos just kind of a bittersweet moment

Jeff Yeah absolutely and itrsquos so well-deserved for him His job has been everything to him so I really feel like I am succeeding a legend Itrsquos bittersweet because he wonrsquot be there day-to-day but I am so happy for him Itrsquos about not screwing things up but itrsquos also about leading NonStop into a new chapter

Gabrielle Yes Tom is kind of a legend in the NonStop space

Jeff He is Everybody knows him Every time I have asked someone ldquoDo you know Tom Moylanrdquo even if it was a few degrees of separation the answer has always been ldquoYesrdquo And not only yes but ldquoWhat a great guyrdquo Hersquos been the face of this group for a long time

Gabrielle Well it sounds like an interesting opportunity and at an interesting time

Jeff With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle We wish you the best of luck in your new position Jeff

Jeff Thank you

40

SQLXPress – Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues – not the least of which was that the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV in order to be used.

If you do install/upgrade GNV, then GNV must be installed first; note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers around the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when performing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and cURL ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file, covering the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named when unpacked on an ODS-2 volume, in addition to handling the ODS-5 format name.
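To make that flow concrete, here is a minimal sketch, in Python rather than DCL, of the general idea behind such a generator: answer the macros it already knows from a table, copy everything else through, and defer the package-specific leftovers to a hand-maintained config_vms.h pulled in at the end. The KNOWN_DEFINES table and file names are illustrative assumptions; the real config_h.com procedure works differently in detail.

# Illustrative stand-in only; the actual config_h.com is a DCL procedure with far more logic.
KNOWN_DEFINES = {          # hypothetical answers this sketch knows how to fill in
    "HAVE_UNISTD_H": "1",
    "HAVE_STRING_H": "1",
}

def generate_config_h(template_path: str, output_path: str) -> None:
    out_lines = []
    with open(template_path) as template:
        for line in template:
            parts = line.split()
            # config.h.in marks unresolved macros as "#undef NAME"
            if len(parts) == 2 and parts[0] == "#undef" and parts[1] in KNOWN_DEFINES:
                out_lines.append("#define {} {}\n".format(parts[1], KNOWN_DEFINES[parts[1]]))
            else:
                out_lines.append(line)
    # Whatever the generator cannot answer is supplied by a hand-written override file.
    out_lines.append('#include "config_vms.h"\n')
    with open(output_path, "w") as output:
        output.writelines(out_lines)

if __name__ == "__main__":
    generate_config_h("config.h.in", "config.h")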

In many ways, the ability to easily port open source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are also tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, Stromasys Inc., and has had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40 Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in open source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical open source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to see your help on the projects as well

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has grown exponentially, on the order of O(2^n), as the HHS breach statistics illustrate.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection (OSI) seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

Application
  <--> formatted data items
Database
  <--> files, directories
Object
  <--> disk blocks
Storage

Each flow represents transport of clear data between layers via a secure tunnel; the descriptions represent example traffic.

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
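To see those three properties concretely, the short Python sketch below uses a toy keyed digit transform as a stand-in for FPE. It is emphatically not NIST FFX (a real deployment would use an FF1/FF3 implementation with proper key management); it only demonstrates that a protected card number can keep the input's length and character set (equivalent), map the same input to the same output every time (referential), and be undone by the key holder (reversible). The key and card number shown are made up.

import hmac, hashlib

def _pad_digits(key: bytes, length: int) -> list:
    # Deterministic keyed pad, one digit per position (illustration only, not FFX).
    digits, counter = [], 0
    while len(digits) < length:
        block = hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        digits.extend(b % 10 for b in block)
        counter += 1
    return digits[:length]

def protect(pan: str, key: bytes) -> str:
    # Digits in, the same number of digits out; the value changes, the format does not.
    pad = _pad_digits(key, len(pan))
    return "".join(str((int(d) + p) % 10) for d, p in zip(pan, pad))

def unprotect(token: str, key: bytes) -> str:
    pad = _pad_digits(key, len(token))
    return "".join(str((int(d) - p) % 10) for d, p in zip(token, pad))

if __name__ == "__main__":
    key = b"demo-key-not-for-production"
    card = "4111111111111111"                             # example card number
    token = protect(card, key)
    assert len(token) == len(card) and token.isdigit()    # equivalent
    assert protect(card, key) == token                     # referential
    assert unprotect(token, key) == card                   # reversible
    print(card, "->", token)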

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.
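As a sketch of why the referential property matters for sharing, the example below stands in for FPE with a simple keyed pseudonym (an HMAC of the identifier): it is deterministic, so extracts from two systems still join on the same token, yet the raw identifier never leaves the data owner. Real FPE would additionally preserve the identifier's format and be reversible; the field names, records, and key here are made up for illustration.

import hmac, hashlib

KEY = b"demo-key-held-only-by-the-data-owner"   # illustrative secret, never shared

def pseudonym(patient_id: str) -> str:
    # The same input always yields the same token, so relations survive de-identification.
    return hmac.new(KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical extracts from two systems, other PII/PHI fields already removed.
visits = [{"patient_id": "P1001", "clinic": "A", "a1c": 6.1},
          {"patient_id": "P1002", "clinic": "B", "a1c": 8.4}]
labs   = [{"patient_id": "P1001", "ldl": 190},
          {"patient_id": "P1002", "ldl": 110}]

# De-identify before handing the extracts to a third-party analytics firm.
visits_out = [{**row, "patient_id": pseudonym(row["patient_id"])} for row in visits]
labs_out   = [{**row, "patient_id": pseudonym(row["patient_id"])} for row in labs]

# The analytics firm can still join the two data sets on the pseudonymous key.
labs_by_id = {row["patient_id"]: row for row in labs_out}
joined = [{**v, **labs_by_id[v["patient_id"]]} for v in visits_out]
print(joined)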

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers. 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct. 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


9

Chris Purcell has 28+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, and networking, which come wrapped with a complete set of services).

You can find Chris on Twitter as Chrispman01. Check out his contribution to the HP CI blog at www.hp.com/go/ciblog

Composable Infrastructure Breakthrough To Fast Fluid IT

Chris Purcell

>> TOP THINKING

You don't have to look far to find signs that forward-thinking IT leaders are seeking ways to make infrastructure more adaptable, less rigid, less constrained by physical factors – in short, make infrastructure behave more like software. You see it in the rise of DevOps and the search for ways to automate application deployment and updates, as well as ways to accelerate development of the new breed of applications and services. You see it in the growing interest in disaggregation – the decoupling of the key components of compute into fluid pools of resources so that IT can make better use of its infrastructure.

As noted in another recent blog, Gear up for the idea economy with Composable Infrastructure, one of the things needed to build this more flexible data center is a way to turn hardware assets into fluid pools of compute, storage, and fabric resources.

The many virtues of disaggregation: You can achieve significant efficiencies in the data center by disaggregating the components of servers so they're abstracted away from the physical boundaries of the box. Think of it this way – today most organizations are essentially standardizing form factors in an attempt to minimize the number and types of servers. But this can lead to inefficiencies: you may have one application that needs a lot of disk and not much CPU, and another that needs a lot of CPU and not a lot of disk. By the nature of standardization your choices are limited by form factors; basically you have to choose small, medium, or large. So you may end up buying two large boxes even though some of the resources will be excess to the needs of the applications.

UPCOMING EVENTS

MENUG
4/10/2016 Riyadh
4/12/2016 Doha
4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
5/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
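As a rough illustration of what infrastructure programmability through a unified API can look like in practice, here is a hedged Python sketch that provisions a node from a template and later applies a firmware baseline through a single REST interface. The host name, URL paths, payload fields, and token handling are assumptions made up for this example; they are not the documented HPE Synergy or OneView API.

# Illustrative sketch only: endpoints, fields, and auth are assumptions, not a real API.
import requests

COMPOSER = "https://composer.example.internal"   # hypothetical unified-API endpoint
HEADERS = {"Authorization": "Bearer REPLACE_WITH_SESSION_TOKEN",
           "Content-Type": "application/json"}

def provision_node(template_name: str, node_name: str) -> str:
    # Compose a node out of pooled compute, storage, and fabric by applying a template.
    payload = {"template": template_name, "name": node_name}
    resp = requests.post(COMPOSER + "/api/server-profiles", json=payload,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["uri"]            # keep the profile URI for later updates

def update_firmware(profile_uri: str, baseline: str) -> None:
    # Roll a firmware baseline onto an existing profile through the same API.
    resp = requests.patch(COMPOSER + profile_uri, json={"firmwareBaseline": baseline},
                          headers=HEADERS, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    uri = provision_node("web-tier-template", "web-03")
    update_firmware(uri, "sp-2016-04")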

An infrastructure for both IT worlds: This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter. HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage, and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services By doing so it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven agility-focused IT that companies need to thrive in the Idea Economy

You can read more about how to do that here: HPE Composable Infrastructure – Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.

Hewlett Packard Enterprise Technology User Group

10

11

Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases – and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations, those specifically designed to leverage Big Data architectures, could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey – with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all: When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators – data scientists, developers, IT operations managers, chief information security officers, and startup founders – who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.

12

"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites – everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study: Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
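A minimal sketch of the mechanics, using made-up probe records: hash each MAC address into a stable pseudonym, then group sightings into a per-visitor history that a clerk-facing app could look up on a return visit. The salt, store zones, and log format are assumptions for illustration, not any particular vendor's analytics pipeline.

# Minimal sketch with hypothetical probe data: the stored identifier is a keyed hash of
# the MAC, and sightings are grouped into a per-visitor history keyed by that pseudonym.
import hashlib
from collections import defaultdict

SALT = b"store-1234-salt"        # illustrative; a real deployment would manage this secret

def visitor_id(mac: str) -> str:
    # Stable pseudonym for a device: the same MAC in, the same identifier out.
    return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()[:12]

# Hypothetical access-point probe log: (MAC address, zone of the store, timestamp).
probes = [
    ("AA:BB:CC:11:22:33", "entrance", "2016-03-01T10:02"),
    ("AA:BB:CC:11:22:33", "denim",    "2016-03-01T10:09"),
    ("DE:AD:BE:EF:00:01", "entrance", "2016-03-01T10:11"),
    ("AA:BB:CC:11:22:33", "entrance", "2016-03-08T17:45"),   # same shopper, later visit
]

history = defaultdict(list)
for mac, zone, ts in probes:
    history[visitor_id(mac)].append((ts, zone))

# A clerk-facing app could now look up a returning device's prior zones and visit pattern.
for vid, sightings in history.items():
    print(vid, sightings)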

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk – they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises: Organizations need to decide whether to perform data analytics on-premises – either virtualized or installed directly on the hard disk (i.e., "bare metal") – or by using a cloud as-a-service model. Companies need to do a cost–benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.

13

STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology

14

Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which oftentimes we would forget to do, or just not do, because "we were just too busy." After the theft we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.

15

How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (cont.)

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages…the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises that number skyrockets to millions. That's six zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed that 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is – THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.

16

Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is looked at and treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software tools, and other solutions to protect their production systems. Typically only a subset of those security solutions are deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five US bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly the industry is seeing an increasing number of breaches on backup and disaster recovery systems Compromising an unpatched or an improperly secured system is much easier through a DR s i te Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis This typical ly includes restor ing l ive data onto backup or DR systems and ensuring appl icat ions continue to run and the business continues to operate But if the disaster recovery system was not monitored or secured s imi lar to the l ive system using s imi lar controls and security solutions the integrity of the system the data was just restored to is in question That dat a may have very we l l been restored to a

compromised system that was lying in wait No one w a n t s to i s s u e o u t a g e n o t i f i c a t i o n s c o u p l e w i t h a breach notification

The security considerations don't end there. Once the DR test has checked out and the compliance box is ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to access, and that activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems and effectively prepare for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you, by extension, lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), a key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
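To make that manual lifecycle concrete, here is a minimal sketch in Python, assuming the widely used cryptography package; the archive path and workflow are illustrative, not a recommended design. Note that the key archive sits on the same site as the data it protects, which is exactly the exposure discussed below.

# Minimal sketch of a local, ad hoc key lifecycle (illustrative only).
# Assumes the "cryptography" package; path and policy are hypothetical.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_ARCHIVE = "/var/keys/app1.key"   # local vault: same site as the data

def generate_key() -> bytes:
    return AESGCM.generate_key(bit_length=256)

def archive_key(key: bytes) -> None:
    # Local-only backup: no remote copy, no audit trail beyond file metadata.
    with open(KEY_ARCHIVE, "wb") as f:
        f.write(key)

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def destroy_key() -> None:
    # Manual deletion ends the lifecycle; recovery is impossible afterwards.
    os.remove(KEY_ARCHIVE)

if __name__ == "__main__":
    k = generate_key()
    archive_key(k)
    print(encrypt(k, b"sensitive record").hex())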

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that would undermine local encryption and key management lifecycle operations.

Local, remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect and misuse
• Places "all eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have simplicity similar to local methods, while also suffering from a similar dependence on manual procedures.

Pros
• Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple: similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in their same place operationally

Cons
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, an enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with the encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed in redundant clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Recognizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are distributed and escalated more efficiently when necessary.

Pros
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations and expertise are best kept within a center of excellence.

In an enterprise-class, centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
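As an illustration of what KMIP support means at the application side, the following sketch uses the open-source PyKMIP client to create, activate and retrieve an AES key from a central key manager. The hostname, port and certificate paths are assumptions; any KMIP-compliant key manager would be addressed in the same way.

# Sketch: an encryption endpoint obtaining keys from a central key manager
# over KMIP. Assumes the open-source PyKMIP client; host, port and
# certificate paths below are hypothetical.
from kmip.pie import client
from kmip.core import enums

with client.ProxyKmipClient(
        hostname="keymanager.example.com", port=5696,   # central key manager
        cert="/etc/pki/tls/endpoint.crt",               # client identity
        key="/etc/pki/tls/endpoint.key",
        ca="/etc/pki/tls/ca.crt") as c:
    # The key is generated and vaulted centrally; only its UID lives with the app.
    uid = c.create(enums.CryptographicAlgorithm.AES, 256)
    c.activate(uid)
    key_obj = c.get(uid)    # retrieved on demand for encrypt/decrypt operations
    print("managed key UID:", uid)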

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale, with a single-pane-of-glass view into controls and auditing, can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use and in motion. Our solutions provide advanced encryption, tokenization and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google) and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.




Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies - laser, inkjet and PageWide - to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing - laser, inkjet and now PageWide - can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services and solutions into one integrated approach. Working with you, we assess, deploy and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage and share information, and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit: www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, www.hp.com/go/printerspeeds



IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity: economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.



Delivering on the IoT Customer Experience

¹ Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
² The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE in Telco, Unified Communications, Alliances and software development.


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management and charging
• Data processing layer operators to acquire data, then verify, consolidate and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors of growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type, from any industry, and manage the whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle management as devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed or mobile networks. These include GPRS (2G and 3G), LTE 4G and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer) and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo, and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and the associated gateways, devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C and B2B2C.

With the DSM module, you can manage IoT applications (configuration, tariff plan, subscription, device association and more) and IoT gateways and devices, including provisioning, configuration and monitoring, and you can troubleshoot IoT devices.
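To make the hierarchical account and RBAC idea concrete, here is a purely illustrative sketch (not the HPE DSM API) of how an operator, a business tenant and a consumer sub-tenant might be modeled, with a role check deciding whether a user may manage a given tenant's devices.

# Illustrative sketch of hierarchical tenants with role-based access control,
# in the spirit of the DSM module; this is not the HPE DSM API.
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    parent: "Tenant | None" = None      # operator -> business -> consumer

@dataclass
class User:
    name: str
    role: str                           # "operator-admin", "tenant-admin", "viewer"
    tenant: Tenant = None

def can_manage_devices(user: User, target: Tenant) -> bool:
    """A user may manage devices in their own tenant or any descendant tenant."""
    if user.role not in ("operator-admin", "tenant-admin"):
        return False
    node = target
    while node is not None:
        if node is user.tenant:
            return True
        node = node.parent
    return False

operator = Tenant("CSP Operator")
utility = Tenant("Water Utility", parent=operator)      # B2B customer
household = Tenant("Household 17", parent=utility)      # B2B2C consumer

admin = User("alice", "tenant-admin", tenant=utility)
print(can_manage_devices(admin, household))   # True
print(can_manage_devices(admin, operator))    # False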

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST and others.
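To illustrate what a protocol connector of this kind does, independent of HPE's actual Protocol Factory, the sketch below uses the common paho-mqtt client (1.x-style API) to receive raw device telemetry and normalize it into a uniform, oneM2M-style payload; the broker address, topic layout and payload format are assumptions.

# Illustrative protocol-connector sketch: bridge raw MQTT device messages into
# a uniform, oneM2M-style payload. Uses paho-mqtt (1.x-style API); broker
# address, topic layout and payload format are hypothetical.
import json
import paho.mqtt.client as mqtt

def normalize(topic: str, raw: bytes) -> dict:
    # devices/<device-id>/telemetry  ->  uniform resource representation
    device_id = topic.split("/")[1]
    return {"m2m:cin": {"con": raw.decode("utf-8"), "lbl": [device_id]}}

def on_message(client, userdata, msg):
    resource = normalize(msg.topic, msg.payload)
    # A real connector would forward this to the data acquisition layer.
    print(json.dumps(resource))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("devices/+/telemetry")
client.loop_forever()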

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with and acquire IoT data, and maintains it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (see the sketch after the list below). The DAV component is also responsible for transformation, validation and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
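The north-bound access pattern mentioned above might look like the following hedged sketch; it assumes a oneM2M-style HTTP binding with the usual X-M2M-Origin and X-M2M-RI headers, and the endpoint URL, resource path and credentials are hypothetical.

# Sketch: a north-bound application reading the latest sensor value through a
# oneM2M-style HTTP REST interface. Endpoint, resource path and credentials
# are hypothetical; headers follow the oneM2M HTTP binding conventions.
import requests

BASE = "https://iot-platform.example.com:8443"   # assumed CSE endpoint
HEADERS = {
    "X-M2M-Origin": "C-meter-dashboard",          # application entity ID
    "X-M2M-RI": "req-0001",                       # request identifier
    "Accept": "application/json",
}

# 'la' addresses the latest contentInstance of the meter's data container.
resp = requests.get(f"{BASE}/cse-base/smart-meter-042/readings/la",
                    headers=HEADERS, verify="/etc/pki/tls/ca.crt")
resp.raise_for_status()
print(resp.json())   # e.g. {"m2m:cin": {"con": "{\"kwh\": 3.2}", ...}}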

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access control policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution and visualization environment for most types of analytics, including batch and real-time (based on 'complex-event processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media and geo-fencing), predictive determination and prescriptive recommendation.
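As a hedged illustration of the descriptive analytics this module enables (not the platform's own analytics interface), a columnar store such as Vertica can be queried directly with SQL. The sketch below uses the open-source vertica_python driver, with hypothetical connection details and table and column names.

# Sketch: a descriptive analytics query against a columnar store such as
# Vertica, via the open-source vertica_python driver. Connection details and
# the meter_readings table/columns are hypothetical.
import vertica_python

conn_info = {"host": "analytics.example.com", "port": 5433,
             "user": "analyst", "password": "secret", "database": "iot"}

query = """
    SELECT device_id,
           DATE_TRUNC('day', reading_ts) AS day,
           AVG(kwh) AS avg_kwh,
           MAX(kwh) AS peak_kwh
    FROM meter_readings
    WHERE reading_ts >= CURRENT_DATE - 30
    GROUP BY device_id, DATE_TRUNC('day', reading_ts)
    ORDER BY day
"""

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute(query)
    for device_id, day, avg_kwh, peak_kwh in cur.fetchall():
        print(device_id, day, avg_kwh, peak_kwh)
finally:
    conn.close()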

Operations and Business Support Systems (OSS/BSS)
The OSS/BSS module provides a consolidated, end-to-end view of devices, gateways and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer' and 'Order Management'.

The OSS/BSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, sitting on top of ubiquitous, securely managed connectivity and enabling the identification, development and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY
Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
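For a first cut, this kind of segmentation needs nothing more than an order export and a few lines of scripting. The sketch below uses Python's pandas library with hypothetical file and column names to separate first-time from repeat buyers and compare their spend and acquisition channels.

# Sketch: first-cut customer segmentation from an order export (illustrative;
# the file name and column names are hypothetical). Requires pandas.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Orders per customer tell us who is a first-time vs. repeat buyer.
per_customer = orders.groupby("customer_id").agg(
    order_count=("order_id", "count"),
    total_spend=("order_total", "sum"),
    first_channel=("marketing_channel", "first"),
)
per_customer["segment"] = per_customer["order_count"].map(
    lambda n: "repeat" if n > 1 else "first-time")

# Compare value and acquisition channel across the two segments.
print(per_customer.groupby("segment")["total_spend"].describe())
print(per_customer.groupby(["segment", "first_channel"]).size())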

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking and storage offer the performance, scale and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT THE EARLIER YOU START THE BETTER One thing is clear ndash the time to develop and enhance your data insight capability is now For more information read the e-Book Turning big data into business insights or talk to your local reseller for help

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98–99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had experienced malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
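As a rough illustration of that second step, the sketch below flags activity that falls outside an established baseline. It is a generic Python example, not XYGATE or any HPE product; the audit-event format, user names, and program names are assumptions made up for the example.

```python
# Toy baseline-deviation check: alert on (user, program) pairs never seen before.
baseline_events = [
    ("ops.batch", "TACL"), ("ops.batch", "FUP"),
    ("app.pay",   "PATHWAY"),
]
todays_events = [
    ("app.pay",   "PATHWAY"),
    ("ops.batch", "FUP"),
    ("ops.batch", "SQLCI"),   # combination not in the baseline -> suspicious
]

# Build the baseline of normal user/program combinations from historical audit data.
baseline = set(baseline_events)

def review(events, baseline):
    """Return events whose user/program combination is outside the baseline."""
    return [event for event in events if event not in baseline]

for user, program in review(todays_events, baseline):
    print(f"ALERT: {user} ran {program}, which is outside the established baseline")
```

Real monitoring products maintain far richer baselines (time of day, volumes, object names) and forward alerts to a SIEM, but the principle is the same: know what normal looks like, and raise an event the moment activity deviates from it.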

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing and I realized then he was so young. It's just funny when you start getting to sales engagement and you're peers and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I just don't have the desire to get involved with it. It's something I'm not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get across those messages I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?", even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & SQL/MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release brought many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first; upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
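The Python sketch below shows the general idea of that step: answer the feature tests a configure script would normally probe for, and defer anything package-specific to config_vms.h. The real config_h.com is a DCL procedure and knows far more tests; the feature map and file names here are made-up illustrations, not the actual procedure.

```python
import re

# Illustrative subset of features known to exist on the target OpenVMS system.
KNOWN_FEATURES = {"HAVE_STRING_H": "1", "HAVE_UNISTD_H": "1", "STDC_HEADERS": "1"}

def generate_config_h(template_lines):
    """Turn config.h.in template lines into config.h, deferring unknowns to config_vms.h."""
    out = []
    for line in template_lines:
        m = re.match(r"\s*#\s*undef\s+(\w+)", line)
        if m and m.group(1) in KNOWN_FEATURES:
            out.append(f"#define {m.group(1)} {KNOWN_FEATURES[m.group(1)]}")
        else:
            out.append(line.rstrip("\n"))
    out.append('#include "config_vms.h"   /* package-specific overrides */')
    return "\n".join(out)

template = [
    "/* Define to 1 if you have <string.h>. */\n",
    "#undef HAVE_STRING_H\n",
    "#undef HAVE_MYSTERY_FEATURE\n",
]
print(generate_config_h(template))
```

Anything the generic pass cannot decide (HAVE_MYSTERY_FEATURE above) is left for the trailing config_vms.h to define or leave undefined, which is exactly the division of labor the article describes.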

In many ways the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years: he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on these projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined. Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems. In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:1 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic at each layer includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
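The toy sketch below illustrates the three properties with a keyed digit transform in Python. It is emphatically not the NIST FFX algorithm and offers no real security; it only shows what "equivalent, referential, reversible" mean in practice for a payment-card field, and the key and card number are made up for the example.

```python
# Toy format-preserving transform: a keyed substitution on each digit position.
# NOT FFX and NOT secure -- for illustrating the three properties only.
import hashlib

def _digit_key(key: bytes, position: int) -> int:
    """Derive a per-position digit offset from the key."""
    return hashlib.sha256(key + position.to_bytes(4, "big")).digest()[0] % 10

def protect(pan: str, key: bytes) -> str:
    return "".join(str((int(d) + _digit_key(key, i)) % 10) for i, d in enumerate(pan))

def reveal(token: str, key: bytes) -> str:
    return "".join(str((int(d) - _digit_key(key, i)) % 10) for i, d in enumerate(token))

key = b"demo-key"
pan = "4111111111111111"
token = protect(pan, key)

assert len(token) == len(pan) and token.isdigit()   # equivalent: same length, same character set
assert token == protect(pan, key)                   # referential: same input -> same output
assert reveal(token, key) == pan                    # reversible: the inverse recovers the original
print(pan, "->", token)
```

In the wrapped-legacy-system scenario, protect() would run in the web interface at ingress and reveal() only in the payment interface at egress, so the database in between never holds a usable card number.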

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business to avoid loss of customers and revenue.

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being on the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership as a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked - would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software you'll know your parachute will open - every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

(c)2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.


UPCOMING EVENTS

MENUG
4/10/2016 Riyadh | 4/12/2016 Doha | 4/14/2016 Dubai

GTUG Connect Germany IT Symposium 2016
4/18/2016 Berlin

HP-UX Boot Camp
4/24-26/2016 Rosemont, Illinois

N2TUG Chapter Meeting
5/5/2016 Plano, Texas

BITUG BIG SIG
5/12/2016 London

HPE NonStop Partner Technical Symposium
5/24/2016 Palo Alto, California

Discover Las Vegas 2016
6/7-9/2016 Las Vegas

But now imagine if you could assemble those stranded or unused assets into pools of resources that are easily available for applications that aren't running on that physical server. And imagine if you could leverage software intelligence that reaches into those pools and pulls together the resources into a single optimized footprint for your applications. Add to that a unified API that delivers full infrastructure programmability, so that provisioning and updates are accomplished in a matter of minutes. Now you can eliminate overprovisioning and silos and hugely increase your ability to scale smoothly and easily. Infrastructure management is simplified, and the ability to make changes rapidly and with minimum friction reduces downtime. You don't have to buy new infrastructure to accommodate an imbalance in resources, so you can optimize CAPEX. And you've achieved OPEX savings too, because your operations become much more efficient and you're not spending as much on power and cooling for unused assets.
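
To make the "unified API" idea more concrete, here is a small, purely illustrative sketch of template-driven provisioning; the endpoint, field names and token handling are hypothetical placeholders, not the actual HPE Synergy or OneView interface.

    import requests

    # Hypothetical composable-infrastructure API endpoint and session token.
    COMPOSER = "https://composer.example.net"
    HEADERS = {"Auth": "example-session-token"}

    # Describe the desired footprint once, as data: compute, storage and fabric
    # are drawn from shared pools rather than dedicated, overprovisioned hardware.
    profile = {
        "name": "web-tier-node-01",
        "computePool": "general-compute",
        "storage": {"pool": "shared-flash", "capacityGiB": 512},
        "network": {"fabric": "prod-fabric", "vlan": 120},
        "firmwareBaseline": "2016.03",
    }

    # A single call composes the resources; applying the same template again
    # stamps out identical nodes in minutes instead of days.
    response = requests.post(f"{COMPOSER}/rest/server-profiles", json=profile, headers=HEADERS)
    response.raise_for_status()
    print("Provisioning task started:", response.json().get("taskUri"))

The point is less the specific calls than the shift they represent: infrastructure is described as data and assembled on demand, rather than racked, cabled and configured per application.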

An infrastructure for both IT worlds
This is exactly what Composable Infrastructure does. HPE recently announced a big step forward in the drive towards a more fluid, software-defined, hyper-efficient datacenter. HPE Synergy is the first platform built from the ground up for Composable Infrastructure. It's a single infrastructure that composes physical and virtual compute, storage and fabric pools into any configuration for any application.

HPE Synergy simplifies ops for traditional workloads and at the same time accelerates IT for the new breed of applications and services. By doing so it enables IT to bridge the gap between the traditional ops-driven and cost-focused ways of doing business and the apps-driven, agility-focused IT that companies need to thrive in the Idea Economy.

You can read more about how to do that here: HPE Composable Infrastructure - Bridging Traditional IT with the Idea Economy.

And here's where you can learn how Composable Infrastructure can help you achieve the speed and agility of cloud giants.


Fast analytics enables businesses of all sizes to generate insights.

As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations that were specifically designed to leverage Big Data architectures could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service and hardened architectures that are almost turnkey, with back-end hardware, database support and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all
When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage and networking types of problems. Over the past five or six years we've been spending our energy, strategy and time on the big areas around mobility, security and of course Big Data."

Democratizing Big Data Value
Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators - data scientists, developers, IT operations managers, chief information security officers and startup founders - who use technology to improve the way we live, work and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company or a large enterprise getting information from all your smaller remote sites - everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study
Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up their historical data that was captured on the previous visit.

"When people are using a mobile device, they're creating data that through apps can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction in real time, regardless of where they are. They don't have to be at their desk - they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores and gauge the impact of weather.
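
As a minimal sketch of the visit-matching idea described above (hypothetical data and store zones; real deployments typically hash or otherwise anonymize MAC addresses before storing them, as shown here):

    import hashlib
    from collections import defaultdict
    from datetime import datetime

    # In-memory visit history keyed by a salted hash of the device MAC address,
    # so the raw identifier is never stored.
    SALT = b"store-1234"
    visits = defaultdict(list)

    def device_key(mac: str) -> str:
        return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()

    def record_probe(mac: str, zone: str, seen_at: datetime) -> None:
        """Log a Wi-Fi probe picked up by an access point in a given store zone."""
        visits[device_key(mac)].append((seen_at, zone))

    def visit_history(mac: str):
        """Return prior (timestamp, zone) sightings for a returning device."""
        return visits.get(device_key(mac), [])

    # The same phone seen in the denim section one week can trigger a
    # personalized offer when it reappears at the entrance the next.
    record_probe("aa:bb:cc:dd:ee:ff", "denim", datetime(2016, 3, 1, 14, 5))
    history = visit_history("aa:bb:cc:dd:ee:ff")
    if history:
        print("Returning device, last seen in:", history[-1][1])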

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises - either virtualized or installed directly on the hard disk (i.e., "bare metal") - or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which oftentimes we would forget to do or just not do because "we were just too busy." After the theft we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line, as well as overseeing XYPRO's risk, compliance, infrastructure and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning, continued

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cybersecurity incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
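
To put those figures in perspective with a rough back-of-the-envelope calculation: at the small-business rate above, a single outage that runs the full 24 hours works out to roughly $8,000/hour x 24 hours, or about $192,000, before any lost customers or reputational damage are counted.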

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is - THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34 and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priority at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics and data restoration. Oftentimes security is overlooked, and infrastructure designated as disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it can be more difficult to detect. Therefore, identical security controls and inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures; disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems and effectively prepare for recovery operations while keeping security at the forefront - and keep your business running. Most importantly, ensure your disaster recovery systems are secured at the same level, and have the same solutions and controls, as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data, and as such is often mandated in regulatory compliance rules as reliable controls over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, and archive or vault keys for long-term recovery - and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
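
To illustrate just how ad hoc this local approach can be, here is a deliberately simple sketch of the kind of one-off key generation and archiving a local administrator might script (the file name is hypothetical); everything the article warns about - no central backup, no audit trail, no lifecycle policy - is left to whoever remembers this file exists:

    import base64
    import os

    # Generate a 256-bit AES key for a local encryption application.
    key = os.urandom(32)

    # "Archive" it by writing it to a file on the same host - a manual,
    # unaudited vault with no remote backup and no expiration policy.
    with open("backup-volume-key-2016-03.b64", "w") as f:
        f.write(base64.b64encode(key).decode("ascii"))

    print("Key written; recovery now depends entirely on this one file.")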

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, with the impact toward supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications - the analogy being locking your front door but placing the keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audit and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons
• Co-locating keys with the encrypted data provides easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles - prone to error, neglect and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles - media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

Figure 2: Remote key management separates encryption key management from the encrypted data

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at a lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros
• Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple - similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit, without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility - if encryption devices move, key management systems can remain in their same place operationally

Cons
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified - or, commonly, an enterprise secure key management - system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to ensure which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys and administrative users, using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies with reliability, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility - the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users could compromise the system - best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to the cost, administrative flexibility and security assurance levels provided.

• Hardware or software assurance: Key management servers deployed as appliances, virtual appliances or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard to measure security assurance levels. A hardened hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.
• Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (see the integration sketch after this list).
• Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.
• Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.
• High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.
• Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.
• Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts and similar methods helps ensure IT visibility.
• Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
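
To show what a standards-based integration can look like from the application side, here is a minimal sketch using the open-source PyKMIP client against any KMIP-compliant key manager; the connection details are placeholders, and the exact client options vary by PyKMIP version, so treat this as an outline rather than a drop-in integration.

    from kmip.pie import client
    from kmip import enums

    # Connect to a KMIP-compliant key manager over mutually authenticated TLS
    # (hostname, port and certificate paths below are placeholders).
    kmip_client = client.ProxyKmipClient(
        hostname="keymanager.example.net",
        port=5696,
        cert="/etc/pki/app-client.pem",
        key="/etc/pki/app-client.key",
        ca="/etc/pki/kmip-ca.pem",
    )

    with kmip_client:
        # Key lifecycle driven from the central server: create a 256-bit AES key,
        # activate it, fetch it for use, and later revoke and destroy it, with
        # every operation logged centrally for audit.
        key_id = kmip_client.create(enums.CryptographicAlgorithm.AES, 256, name="backup-volume-key")
        kmip_client.activate(key_id)
        secret = kmip_client.get(key_id)  # returned only to authenticated, authorized clients
        # ... hand 'secret' to the application's encryption layer ...
        kmip_client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, key_id)
        kmip_client.destroy(key_id)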

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability and the high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security
When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use and in-motion. Our solutions provide advanced encryption, tokenization and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google) and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs - the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and has scaled it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies - laser, inkjet and PageWide - to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing - laser, inkjet and now PageWide - can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents - without the wait. With HP PageWide, you get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class, by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services and solutions into one integrated approach. Working with you, we assess, deploy and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit: www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, www.hp.com/go/printerspeeds


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2
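
As a quick sanity check on those forecast figures (assuming the growth compounds over the seven years from 2013 to 2020): a base growing at 31.7% per year multiplies by about 1.317^7 ≈ 6.9x, so an installed base of 20.8 billion units in 2020 implies a starting point of roughly 3 billion connected endpoints in 2013.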

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of network options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory - from limited-capability M2M services to the super-capable IoT ecosystem - has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology - to introduce new services and business models - may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity - economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility or bandwidth) - which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low-throughput network connectivity.


Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015.
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE across Telco, Unified Communications, Alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage the whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable.

With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform, highlighting Manage Sensors and Verticals, Data Monetization Chain, Standards Alignment, Connectivity Agnostic, and New Service Offerings. © Copyright Hewlett Packard Enterprise 2016]

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
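
To give a feel for the device-side traffic such protocol connectors ingest, the sketch below publishes a single meter reading over MQTT using the open source paho-mqtt client. The broker address, topic naming, and payload layout are hypothetical examples, not values defined by the HPE Universal IoT Platform.

```python
# Minimal sketch: a device (or gateway) pushing one reading over MQTT, one of
# the protocols listed above. Broker, topic, and payload are hypothetical.
import json
import time

import paho.mqtt.publish as publish

BROKER = "iot-broker.example.com"          # hypothetical broker endpoint
TOPIC = "site-42/meter-0017/telemetry"     # hypothetical topic scheme

reading = {
    "ts": int(time.time()),  # epoch timestamp of the sample
    "kwh": 1523.7,           # example meter value
}

# Publish with QoS 1 (at-least-once delivery).
publish.single(TOPIC, json.dumps(reading), qos=1, hostname=BROKER, port=1883)
```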

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a minimal request sketch follows the list below). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data unit transformation to application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
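
As mentioned above, applications reach acquired data through a oneM2M-compliant HTTP REST interface. The hedged sketch below posts one contentInstance into a container resource; the endpoint, originator credential, and resource names are hypothetical, and header details can vary between oneM2M releases and deployments.

```python
# Hedged sketch of a oneM2M-style HTTP request: an application writes one
# contentInstance into a container resource. Host, originator (X-M2M-Origin),
# and resource paths are hypothetical examples, not HPE-defined values.
import json
import requests

CSE_BASE = "https://iot-platform.example.com/onem2m"   # hypothetical endpoint
CONTAINER = f"{CSE_BASE}/meter-0017/telemetry"

headers = {
    "X-M2M-Origin": "C-demo-app",              # originator identity (example)
    "X-M2M-RI": "req-0001",                    # request identifier
    "Content-Type": "application/json;ty=4",   # ty=4 indicates contentInstance
}
body = {"m2m:cin": {"con": json.dumps({"kwh": 1523.7})}}

resp = requests.post(CONTAINER, headers=headers, json=body, timeout=10)
resp.raise_for_status()
print("created:", resp.status_code)
```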

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure, encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring strong performance even for distinctly different types of operations, such as transactional operations and analytics or batch-processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
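
To give a flavor of the descriptive analytics this module supports, here is a hedged sketch that runs a per-device daily aggregate with the open source vertica_python client; the connection details, table, and column names are assumptions for illustration, not part of the product schema.

```python
# Illustrative descriptive analytic against a hypothetical telemetry table,
# using the open source vertica_python client. Connection details, table, and
# column names are assumptions for the sake of the example.
import vertica_python

conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analyst",
    "password": "secret",
    "database": "iot",
}

QUERY = """
    SELECT device_id,
           DATE_TRUNC('day', reported_at) AS day,
           AVG(kwh) AS avg_kwh,
           MAX(kwh) AS peak_kwh
    FROM telemetry
    GROUP BY device_id, DATE_TRUNC('day', reported_at)
    ORDER BY day, device_id
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    for device_id, day, avg_kwh, peak_kwh in cur.fetchall():
        print(device_id, day, round(avg_kwh, 2), round(peak_kwh, 2))
```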

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.



WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
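
As a concrete illustration of that first step, the sketch below splits first-time from repeat buyers in a plain order export and compares their average order value; the file name and column names are hypothetical.

```python
# Minimal sketch: split first-time vs. repeat buyers from an order export and
# compare purchasing behavior. File and column names are hypothetical.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])  # hypothetical export

# Count orders per customer, then label each customer as first-time or repeat.
order_counts = orders.groupby("customer_id")["order_id"].count()
orders["segment"] = orders["customer_id"].map(
    lambda cid: "repeat" if order_counts[cid] > 1 else "first-time"
)

# Compare order volume and average order value by segment.
summary = orders.groupby("segment")["order_total"].agg(["count", "mean"])
print(summary.rename(columns={"count": "orders", "mean": "avg_order_value"}))
```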

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage, and easy to deploy with low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39 percent and 35 percent of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances
Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99 percent of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35 percent of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50 percent of companies in the study had implemented access governance tools, and fewer than 45 percent had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23 percent ROI and encryption technologies had a 21 percent ROI. Access governance had a 13 percent ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kenneth.scudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what would I want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is that I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who has motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition for these six months is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop.

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

New: Now audits 100% of all SQL/MX and MP user activity. Integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release had many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha OpenVMS 8.3+ and IA64 OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways the ability to easily port Open Source Software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site
Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site
Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive
Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives
There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain underprotected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.
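
To illustrate what format-preserving protection looks like in practice, here is a hedged sketch using pyffx, an open source Python implementation of the FFX construction; the key and field layout are hypothetical, and a production system would rely on a vetted, key-managed data protection product rather than this snippet.

```python
# Hedged illustration of format-preserving encryption: a nine-digit account
# number is enciphered into another nine-digit number, so the protected value
# still fits the legacy column and validation rules. Uses the open source
# pyffx package; the key and field are hypothetical examples.
import pyffx

key = b"demo-key-not-for-production"
account_fpe = pyffx.Integer(key, length=9)   # cipher over 9-digit integers

plaintext = 123456789
protected = account_fpe.encrypt(plaintext)   # yields another 9-digit value
recovered = account_fpe.decrypt(protected)

print(f"plaintext: {plaintext:09d}")
print(f"protected: {protected:09d}")
print(f"recovered: {recovered:09d}")

assert recovered == plaintext
```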

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.¹ Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.²

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased on the order of O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices | Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from its original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Figure: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic: formatted data items (Database), files and directories (Object), disk blocks (Storage). Flow arrows represent transport of clear data between layers via a secure tunnel; the Description column represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
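To make the three properties concrete, here is a minimal Python sketch. It is emphatically not the NIST FFX algorithm: it substitutes a weak keyed affine permutation over digit strings purely to illustrate what equivalent, referential, and reversible mean in code. The key and card number are placeholder test values; a production system would use a validated FPE implementation.

```python
# Toy illustration of the three FPE properties (equivalent, referential,
# reversible) on digit strings. NOT the NIST FFX standard -- a weak keyed
# affine permutation used only to make the properties concrete.
import hashlib

SECRET = b"demo-secret"  # placeholder demo key, not a real secret


def _params(length: int):
    """Derive an affine permutation (a*x + b) mod 10^length from SECRET."""
    modulus = 10 ** length
    digest = hashlib.sha256(SECRET + bytes([length])).digest()
    a = int.from_bytes(digest[:8], "big") | 1   # force odd...
    while a % 5 == 0:                           # ...and not a multiple of 5,
        a += 2                                  # so 'a' is coprime with 10^length
    b = int.from_bytes(digest[8:16], "big") % modulus
    return a % modulus, b, modulus


def protect(digits: str) -> str:
    """Encode a digit string into another digit string of the same length."""
    a, b, m = _params(len(digits))
    return f"{(a * int(digits) + b) % m:0{len(digits)}d}"


def unprotect(protected: str) -> str:
    """Invert protect(), recovering the original digit string."""
    a, b, m = _params(len(protected))
    a_inv = pow(a, -1, m)  # modular inverse (Python 3.8+)
    return f"{((int(protected) - b) * a_inv) % m:0{len(protected)}d}"


if __name__ == "__main__":
    pan = "4111111111111111"                            # well-known test card number
    token = protect(pan)
    assert len(token) == len(pan) and token.isdigit()   # equivalent
    assert protect(pan) == token                        # referential
    assert unprotect(token) == pan                      # reversible
    print(pan, "->", token)
```

Because protect() is a true permutation of the digit space, two different inputs can never collide, which is what preserves primary and foreign key relationships in a protected database.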

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the new value created by data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11 Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct 2015.
15 Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct 2015.


"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you



Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked - would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open - every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Page 14: Connect Converge Spring 2016


Fast analytics enables businesses of all sizes to generate insights. As you enter a department store, a sales clerk approaches, offering to direct you to newly stocked items that are similar in size and style to your recent purchases, and almost instantaneously you receive coupons on your mobile device related to those items. These days many people don't give a second thought to such interactions, accustomed as we've become to receiving coupons and special offers on our smartphones in near real time.

Until quite recently, only the largest organizations, those specifically designed to leverage Big Data architectures, could operate on this scale. It required too much expertise and investment to get a Big Data infrastructure up and running to support such a campaign.

Today we have "approachable" analytics, analytics-as-a-service, and hardened architectures that are almost turnkey, with back-end hardware, database support, and applications all integrating seamlessly. As a result, the business user on the front end is able to interact with the data and achieve insights with very little overhead. Data can therefore have a direct impact on business results for both small and large organizations.

Real-time analytics for all
When organizations try to do more with data analytics to benefit their business, they have to take into consideration the technology, skills, and culture that exist in their company.

Dasher Technologies provides a set of solutions that can help people address these issues. "We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process, and technology mantra," says Chris Saso, senior VP of technology at Dasher Technologies, "addressing people's scale-out server, storage, and networking types of problems. Over the past five or six years, we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course Big Data."

Democratizing Big Data Value | Dana Gardner, Principal Analyst, Interarbor Solutions

BIG DATA

Analyst Dana Gardner hosts conversations with the doers and innovators (data scientists, developers, IT operations managers, chief information security officers, and startup founders) who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.


"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites; everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study
Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data captured on the previous visit.
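As a rough illustration of how such Wi-Fi probe data might be turned into a repeat-visitor view, here is a small Python sketch. The event format, the keyed hashing of MAC addresses, and the key itself are illustrative assumptions, not a description of any particular retail analytics product.

```python
# Minimal sketch: turn raw Wi-Fi probe events into per-shopper visit histories.
# MAC addresses are replaced by keyed pseudonyms so visits can be linked
# without retaining the raw hardware identifiers.
import hashlib
import hmac
from collections import defaultdict

PSEUDONYM_KEY = b"rotate-me-regularly"   # hypothetical secret used to pseudonymize MACs


def visitor_id(mac: str) -> str:
    """Derive a stable pseudonym for a MAC address."""
    return hmac.new(PSEUDONYM_KEY, mac.lower().encode(), hashlib.sha256).hexdigest()[:16]


def visits_by_shopper(events):
    """events: iterable of (mac, store_zone, timestamp) tuples captured by access points."""
    history = defaultdict(list)
    for mac, zone, ts in events:
        history[visitor_id(mac)].append((ts, zone))
    return history


if __name__ == "__main__":
    sample = [
        ("AA:BB:CC:11:22:33", "entrance", "2016-03-01T10:02"),
        ("AA:BB:CC:11:22:33", "denim",    "2016-03-01T10:09"),
        ("AA:BB:CC:11:22:33", "denim",    "2016-03-08T17:40"),  # a return visit
    ]
    for shopper, visits in visits_by_shopper(sample).items():
        print(shopper, visits)
```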

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk; they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises, either virtualized or installed directly on the hard disk (i.e., "bare metal"), or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know whether you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from an on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which we would often forget to do, or just not do because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned: either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months, we bit the bullet and had no choice but to close up shop.


Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (continued)

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages... the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's six zeros, folks. Compound that by the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top-five US bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want during disaster recovery is to be hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches of backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just nice to have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures; disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security in disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems, effectively prepare you for recovery operations while keeping security at the forefront, and keep your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), a key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, the security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically involve manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
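A minimal sketch of that ad hoc local lifecycle, assuming the third-party Python cryptography package and a hypothetical archive path, shows how little stands between the data and an unrecoverable loss if the single local archive is damaged:

```python
# Sketch of an ad hoc, local key lifecycle: generate a key, encrypt with it,
# and archive it on the same host. Paths are hypothetical; requires the
# third-party 'cryptography' package (pip install cryptography).
from pathlib import Path
from cryptography.fernet import Fernet

# Co-located with the data it protects -- exactly the risk discussed above.
KEY_ARCHIVE = Path("/var/local/keys/backup-key.fernet")


def generate_and_archive_key() -> bytes:
    key = Fernet.generate_key()
    KEY_ARCHIVE.parent.mkdir(parents=True, exist_ok=True)
    KEY_ARCHIVE.write_bytes(key)          # manual archival step, easy to neglect
    return key


def encrypt_file(path: Path, key: bytes) -> None:
    token = Fernet(key).encrypt(path.read_bytes())
    path.with_suffix(path.suffix + ".enc").write_bytes(token)


def recover_key() -> bytes:
    # If this single archive is lost or corrupted, the encrypted data is
    # unrecoverable -- the dependability weakness of purely local key management.
    return KEY_ARCHIVE.read_bytes()
```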

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, along with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving the keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, because each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-locating keys with the encrypted data provides easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect, and misuse
• Places "all eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
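The boot-time scenario above can be sketched as follows; the in-memory key store, device identifiers, and audit logger are illustrative assumptions rather than a description of any particular key manager:

```python
# Sketch of a boot-time key request against a remote key manager: a device
# asks for its key; if the device has been reported stolen, the key has been
# destroyed, the request is refused, and the attempt is logged as evidence.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("key-audit")


class RemoteKeyManager:
    def __init__(self):
        self._keys = {}          # device_id -> key bytes
        self._revoked = set()    # device_ids reported stolen or decommissioned

    def register(self, device_id: str, key: bytes) -> None:
        self._keys[device_id] = key

    def revoke(self, device_id: str) -> None:
        """Destroy the key so a stolen device can never decrypt its storage."""
        self._revoked.add(device_id)
        self._keys.pop(device_id, None)
        audit.info("key destroyed for %s at %s", device_id,
                   datetime.now(timezone.utc).isoformat())

    def request_key(self, device_id: str) -> bytes:
        if device_id in self._revoked or device_id not in self._keys:
            audit.warning("denied key request from %s", device_id)
            raise PermissionError("access to data revoked")
        audit.info("released key to %s", device_id)
        return self._keys[device_id]
```

The audit trail produced by revoke() and request_key() is the kind of proof of access control the safe-harbor argument above depends on.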

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in the same place operationally

Cons:
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to ensure which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users, using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.
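One way to picture the partitioning and authorization discipline described here is the following sketch; the partition names, roles, and in-memory structures are illustrative assumptions, not the design of any specific product:

```python
# Sketch of centralized key management partitioning: keys are grouped by
# application partition, and a request is honored only when the requesting
# application is enrolled and the administering user holds a matching role.
from dataclasses import dataclass, field


@dataclass
class Partition:
    name: str
    keys: dict = field(default_factory=dict)            # key_id -> key bytes
    authorized_apps: set = field(default_factory=set)   # enrolled application IDs
    admin_roles: set = field(default_factory=set)       # roles allowed to administer


class CentralKeyManager:
    def __init__(self):
        self.partitions = {}

    def add_partition(self, partition: Partition) -> None:
        self.partitions[partition.name] = partition

    def fetch_key(self, partition_name: str, app_id: str, key_id: str) -> bytes:
        partition = self.partitions[partition_name]
        if app_id not in partition.authorized_apps:
            raise PermissionError(f"{app_id} is not enrolled in {partition_name}")
        return partition.keys[key_id]

    def rotate_key(self, partition_name: str, admin_role: str,
                   key_id: str, new_key: bytes) -> None:
        partition = self.partitions[partition_name]
        if admin_role not in partition.admin_roles:
            # Prevents an over-privileged user from touching every partition.
            raise PermissionError(f"role {admin_role} cannot administer {partition_name}")
        partition.keys[key_id] = new_key
```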

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users improves on the benefit of local management
• Automation and consistency of key lifecycle procedures are universally enforced, removing the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users could compromise the system; best practices can minimize these risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure, where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover Administrators can connect to appliances from anywhere globally to enforce policies with a single set of controls to manage and a single point for auditing security and performance of the distributed system

Considerations for deploying a centralized enterprise key management system

Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys share a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance. Key management servers deployed as appliances, virtual appliances, or software protect keys to varying degrees of reliability. FIPS 140-2 is the standard for measuring security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary. The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a brief client sketch follows this list).

Policy model. Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

Partitioning and user separation. To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability. For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations to support failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability. As applications scale and new applications are enrolled in a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging. Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration. As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
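To make the KMIP consideration concrete, here is a minimal, hedged sketch of how an application might create and later retrieve an AES key from a KMIP-compliant key manager using the open-source PyKMIP client library. The hostname, port, and certificate paths are illustrative assumptions, not details taken from this article or from any specific product.

import ssl  # noqa: imported only if you need to pin a TLS version

from kmip.core import enums
from kmip.pie import client

# Connect to a KMIP-compliant key manager (all connection details below are
# placeholders for this sketch).
kmip_client = client.ProxyKmipClient(
    hostname="keymanager.example.com",
    port=5696,
    cert="/etc/pki/tls/certs/app-cert.pem",
    key="/etc/pki/tls/private/app-key.pem",
    ca="/etc/pki/tls/certs/ca-cert.pem",
)

with kmip_client:
    # Ask the key manager to generate a 256-bit AES key; only the key's
    # identifier is returned to the application.
    key_id = kmip_client.create(enums.CryptographicAlgorithm.AES, 256)

    # Later, an authorized application retrieves the managed key by ID.
    key = kmip_client.get(key_id)
    print("Created key", key_id, "->", key)

Because the protocol, rather than the vendor, defines the interface, the same client code can in principle be pointed at any KMIP-conformant server, which is what makes the standards-based approach attractive for long-term reusability.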

Conclusions

As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively, and as connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies: HPE Enterprise Secure Key Manager

Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys that unifies and automates encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and the high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise

ESKM scales easily to support large enterprise deployments of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE

Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
by Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet, and PageWide, to address their printing needs.

How HP PageWide Technology is different

To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office

HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, laser, inkjet, and now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, you get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization

Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP

To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution

Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.[1] In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.[2]

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions

Given the requirement for connectivity, many see IoT as a natural fit for the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option, for example low throughput network connectivity, is required to improve the efficiency and return on investment (ROI) of such use cases.


Delivering on the IoT Customer Experience

[1] Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
[2] The Internet of Things: Making Sense of the Next Mega-Trend, 2014, Goldman Sachs

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify and consolidate it and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

HPE Universal IoT Platform Overview For CSPs and enterprises to become IoT operators and monetize the value of IoT a need exists for a horizontal platform Such a platform must be able to easily onboard new use cases being defined by an application and a device type from any industry and manage a whole ecosystem from the time the application is on-boarded until itrsquos removed In addition the platform must also support scalability and lifecycle when the devices become distributed by millions over periods that could exceed 10 yearsHewlett Packard Enterprise (HPE) Communication amp Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements At the

heart this platform adapts HPE CMSrsquos own carrier-grade telco softwaremdashwidely used in the communications industrymdash by adding specific intellectual property to deal with unique IoT requirements The platform also leverages HPE offerings such as cloud big data and analytics applications which include virtual private cloud and Vertica

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devicesmdash standards and proprietary communicationmdashand IoT applications In doing so it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols HPE Universal IoT Platform can be deployed for example to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network and integrating devices connected by other Wi-Fi fixed or mobile networks These include GPRS (2G and 3G) LTE 4G and ldquoLow Throughput Networksrdquo such as LoRa

On top of ubiquitous connectivity the HPE Universal IoT Platform provides federation for device and service management and data acquisition and exposure to applications Using our platform clients such as public utilities home automation insurance healthcare national regulators municipalities and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainableWith the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators, CSPs and enterprises alike, can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture

The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and to various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low total cost of ownership (TCO) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization is the essence of the IoT business model, and it is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need to be purchased, as licenses or as-a-Service, with the option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and the associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors and verticals, data monetization chain, standards alignment, connectivity-agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plans, subscriptions, device associations, and more) and IoT gateways and devices (including provisioning, configuration, and monitoring), and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

The NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies needed to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
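To picture what the NIP ingests, here is a minimal, hedged device-side sketch of a sensor publishing a JSON reading over MQTT, one of the protocols listed above, using the open-source paho-mqtt client. The broker address, topic naming, and payload fields are illustrative assumptions, not part of the HPE platform documentation.

import json
import time

import paho.mqtt.client as mqtt

# Hypothetical smart meter publishing a reading (paho-mqtt 1.x constructor).
mqtt_client = mqtt.Client(client_id="meter-0042")
mqtt_client.connect("mqtt-broker.example.com", 1883, keepalive=60)

reading = {
    "deviceId": "meter-0042",
    "timestamp": int(time.time()),
    "kwh": 12.7,  # illustrative measurement field
}

# A protocol connector on the platform side could be configured to subscribe
# to this topic and map the payload into the uniform resource model.
mqtt_client.publish("sensors/energy/meter-0042", json.dumps(reading), qos=1)
mqtt_client.disconnect()

The point of the Protocol Factory approach described above is that only this edge-facing mapping changes per protocol; the north-bound data model stays the same.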

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using the oneM2M-compliant HTTP REST interface (a hedged sketch of such a request appears at the end of this subsection). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Processing data and triggering actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and support for data communication only with trusted devices
• Management of device security keys for secure, encrypted communication
• Access Control Policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics or batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
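As promised above, here is a hedged sketch of how a north-bound application might read the latest data point for a device over a oneM2M-style HTTP REST interface. The base URL, resource path, originator name, and header values follow general oneM2M conventions and are illustrative assumptions, not documented endpoints of the HPE platform.

import requests

BASE_URL = "https://iot-platform.example.com/onem2m"  # assumed CSE root

headers = {
    "X-M2M-Origin": "C-analytics-app",  # originator (application) identity
    "X-M2M-RI": "req-0001",             # request identifier
    "Accept": "application/json",
}

# In oneM2M, 'la' addresses the latest contentInstance in a container.
response = requests.get(
    f"{BASE_URL}/in-cse/meter-0042/readings/la",
    headers=headers,
    timeout=10,
)
response.raise_for_status()
print(response.json())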

Data Analytics
The Data Analytics module leverages HPE Vertica technology to discover meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
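As a small illustration of the batch-analytics side, the sketch below runs an aggregate query against a columnar store using the open-source vertica_python driver. The connection details, table, and column names are hypothetical and only show the general pattern of summarizing device readings.

import vertica_python

conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analyst",
    "password": "********",
    "database": "iot",
}

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    # Daily average consumption per device over the last 30 days
    # (meter_readings is a hypothetical table of ingested IoT data).
    cursor.execute("""
        SELECT device_id,
               DATE_TRUNC('day', reading_ts) AS day,
               AVG(kwh) AS avg_kwh
        FROM meter_readings
        WHERE reading_ts > NOW() - INTERVAL '30 days'
        GROUP BY device_id, DATE_TRUNC('day', reading_ts)
        ORDER BY device_id, day
    """)
    for device_id, day, avg_kwh in cursor.fetchall():
        print(device_id, day, avg_kwh)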

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market of IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success

The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy turning previously ignored data into business gold

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementing business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
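As a small, hedged illustration of that first segmentation step, the Python sketch below splits customers into first-time and repeat buyers from a simple order history and compares their activity and spend. The file name and column names are assumptions for the example, not part of any HPE offering.

import pandas as pd

# Assumed order history with at least: customer_id, order_total
orders = pd.read_csv("orders.csv")

# Count orders and total spend per customer.
per_customer = orders.groupby("customer_id").agg(
    order_count=("order_total", "count"),
    total_spend=("order_total", "sum"),
)

# Label each customer as a first-time or repeat buyer.
per_customer["segment"] = per_customer["order_count"].map(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare the two segments: how many customers, and average behavior.
summary = per_customer.groupby("segment").agg(
    customers=("order_count", "size"),
    avg_orders=("order_count", "mean"),
    avg_total_spend=("total_spend", "mean"),
)
print(summary)

Even a simple split like this makes it possible to ask the follow-up questions in the paragraph above, such as which marketing efforts repeat buyers responded to.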

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START, THE BETTER

One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings that may be useful for NonStop users.

Here are a few key findings of the Ponemon study that I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge of deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users have access only to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news: the report shows that implementing those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI, and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kenneth.scudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days, there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was, and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP)-focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the most mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop.

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release contained many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, who review the monthly conference call notes, or who listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require GNV to be installed in order to use them.

If you do install or upgrade GNV, then GNV must be installed first; upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers for the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
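To give a feel for what the config_h.com step is doing, here is a minimal Python sketch of the general idea: scan config.h.in, turn #undef lines into #define lines for features assumed to be available, and end the file with the include of the package-specific config_vms.h. This is only a conceptual illustration under stated assumptions; the real config_h.com is a DCL procedure with far more platform probes, and the feature list and file names below are invented for the example.

# Conceptual sketch only: the real config_h.com is a DCL procedure with many
# more probes. KNOWN_FEATURES is a made-up example list, not the real probe set.
KNOWN_FEATURES = {"HAVE_STRING_H": "1", "HAVE_UNISTD_H": "1", "STDC_HEADERS": "1"}

def generate_config_h(template_path, output_path):
    with open(template_path) as src, open(output_path, "w") as dst:
        for line in src:
            stripped = line.strip()
            if stripped.startswith("#undef "):
                name = stripped.split()[1]
                if name in KNOWN_FEATURES:
                    # Feature is assumed available: turn the #undef into a #define.
                    dst.write("#define %s %s\n" % (name, KNOWN_FEATURES[name]))
                    continue
            dst.write(line)
        # Package-specific overrides live in config_vms.h, included last.
        dst.write('#include "config_vms.h"\n')

# Example use (hypothetical file names):
# generate_config_h("config.h.in", "config.h")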

In many ways the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. There are also updated ports in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environments used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40. Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain underprotected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.[1] Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.[2]

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,[3] as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.[4] This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.[5] Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS).[6]

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant (bounded by O(1)), the number of records exposed has grown on the order of O(2^n), as illustrated by the following diagram.(1)

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

(1) This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% were caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access security and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.[7] Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing their originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.[8] It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection (OSI) seven-layer network model, is a familiar concept.[9] In the data protection stack, each layer represents a discrete protection(2) responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.[10]

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


(2) We use the term "protection" to mean a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. From top to bottom, the layers are Application, Database, Object, and Storage, with example traffic such as formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.[11]

As an exercise, let's consider "wrapping" a legacy system with a new web interface[12] that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.(3) The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
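To make the equivalent, referential, and reversible properties concrete, here is a small illustrative Python sketch of format-preserving protection over a 16-digit string. It uses a toy Feistel construction keyed with HMAC-SHA256 purely for illustration; it is not the NIST FFX/FF1 algorithm described in this article and should not be used to protect real cardholder data.

import hmac, hashlib

KEY = b"demo-key"    # illustration only; real deployments use managed keys
ROUNDS = 8           # toy Feistel network over two 8-digit halves

def _round(value, round_no):
    # Pseudo-random function for one Feistel round, reduced mod 10^8.
    msg = ("%d:%d" % (round_no, value)).encode()
    digest = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % 10**8

def protect(pan):
    """Map a 16-digit string to another 16-digit string (deterministic, reversible)."""
    left, right = int(pan[:8]), int(pan[8:])
    for r in range(ROUNDS):
        left, right = right, (left + _round(right, r)) % 10**8
    return "%08d%08d" % (left, right)

def recover(token):
    """Invert protect(): recover the original 16-digit string."""
    left, right = int(token[:8]), int(token[8:])
    for r in reversed(range(ROUNDS)):
        left, right = (right - _round(left, r)) % 10**8, left
    return "%08d%08d" % (left, right)

if __name__ == "__main__":
    pan = "4111111111111111"
    token = protect(pan)
    assert len(token) == 16 and token.isdigit()   # equivalent: same format and length
    assert protect(pan) == token                  # referential: deterministic mapping
    assert recover(token) == pan                  # reversible: inverse recovers input
    print(pan, "->", token)

Because the mapping is deterministic and collision-free over the 16-digit space, joins on a protected column still line up across tables, which is exactly the referential property the article relies on for sharing anonymized data.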

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,[13] and their customers, costing consumers billions of dollars annually.[14] As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

(3) American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics[15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk of acquiring a serious chronic condition.[16] De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government[17] and chargebacks from insurance companies[18] if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story
RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices for modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Page 15: Connect Converge Spring 2016

12

"Data analytics is nothing new," says Justin Harrigan, data architecture strategist at Dasher Technologies. "We've been doing it for more than 50 years with databases. It's just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn't take a week to come back or that doesn't time out on a traditional database."

"Almost every company nowadays is growing so rapidly with the type of data they have," adds Saso. "It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites: everyone is compiling data to [generate] better business decisions or create a system that makes their products run faster."

There are now many options available to people just starting out with using larger data set analytics. Online providers, for example, can scale up a database in a matter of minutes. "It's much more approachable," says Saso. "There are many different flavors and formats to start with, and people are realizing that."

"With Big Data you think large data sets, but you [also have] speed and agility," adds Harrigan. "The ability to have real-time analytics is something that's becoming more prevalent, as is the ability to not just run a batch process for 18 hours on petabytes of data, but have a chart or a graph or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream."

This often involves online transaction processing (OLTP) data that needs to run in memory or on hardware that's extremely fast, to create a data stream that can ingest all the different information that's coming in.

A retail case study
Retail is one industry that is benefiting from approachable analytics. For example, mobile devices can now act as sensors because they constantly ping access points over Wi-Fi. Retailers can capture that data and, by using a MAC address as a unique identifier, follow someone as they move through a store. Then, when that person returns to the store, a clerk can call up the historical data that was captured on the previous visit.
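As a rough illustration of the mechanics just described, the following Python sketch shows how captured Wi-Fi probe events might be keyed on a salted hash of the MAC address so a returning device can be recognized without storing the raw identifier. The event format, salt, and store details are hypothetical, not taken from the article.

import hashlib
from collections import defaultdict
from datetime import datetime

SALT = b"store-1234-rotating-salt"   # hypothetical salt; rotate it to limit re-identification

def device_key(mac):
    """Pseudonymize a MAC address so visits can be linked without keeping the raw MAC."""
    return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()[:16]

# visit history keyed by pseudonymous device id
visits = defaultdict(list)

def record_ping(mac, seen_at):
    """Record a probe/ping event; return True if this device has been seen before."""
    key = device_key(mac)
    returning = len(visits[key]) > 0
    visits[key].append(seen_at)
    return returning

# example: two pings from the same device on different days
first = record_ping("AA:BB:CC:DD:EE:FF", datetime(2016, 3, 1, 10, 15))
second = record_ping("aa:bb:cc:dd:ee:ff", datetime(2016, 3, 8, 17, 40))
print(first, second)   # False True -> the second visit is recognized as a return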

"When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier as well as to application hosts and the application writers," says Dana Gardner, principal analyst for Interarbor Solutions and host of the Briefings Direct podcast. "So we have streams of data now about user experience and activities. We also can deliver data and insights out to people in the other direction, in real time, regardless of where they are. They don't have to be at their desk; they don't have to be looking at a specific business intelligence application, for example."

If you give that data to a clerk in a store, that person can benefit by understanding where in the store to put jeans to impact sales. Rather than working from a quarterly report with information that's outdated for the season, sales clerks can make changes the same day they receive the data, as well as see what other sites are doing. This opens up a new world of opportunities in terms of the way retailers place merchandise, staff stores, and gauge the impact of weather.

Cloud vs. on-premises
Organizations need to decide whether to perform data analytics on-premises (either virtualized or installed directly on the hard disk, i.e., "bare metal") or by using a cloud as-a-service model. Companies need to do a cost-benefit analysis to determine the answer. Over time, many organizations expect to have a hybrid capability, moving back and forth between both models.

It's almost an either-or decision at this time, Harrigan believes. "I don't know what it will look like in the future," he says. "Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December."

Cloud can also work well if your business is just starting out, he adds, and you don't know if you're going to need a full 400-node cluster to run your analytics platform.

Companies that benefit from on-premises data architecture are those that can realize significant savings by not using cloud and paying someone else to run their environment. Those companies typically try to maximize CPU usage and then add nodes to increase capacity.
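To make the cost-benefit comparison mentioned earlier a little more concrete, here is a small, hedged Python sketch that estimates the break-even utilization point between renting cloud capacity by the hour and amortizing an on-premises node. All prices, lifetimes, and rates below are invented placeholders, not vendor figures.

def monthly_cloud_cost(hourly_rate, hours_used):
    # Pay-as-you-go: cost scales with the hours actually consumed.
    return hourly_rate * hours_used

def monthly_onprem_cost(purchase_price, lifetime_months, monthly_opex):
    # Amortized hardware plus fixed operating cost, independent of utilization.
    return purchase_price / lifetime_months + monthly_opex

if __name__ == "__main__":
    CLOUD_RATE = 1.50        # $/node-hour (placeholder)
    NODE_PRICE = 12000.0     # $ per on-prem node (placeholder)
    LIFETIME = 36            # months of useful life (placeholder)
    OPEX = 250.0             # $/month power, space, admin (placeholder)

    onprem = monthly_onprem_cost(NODE_PRICE, LIFETIME, OPEX)
    breakeven_hours = onprem / CLOUD_RATE
    print("On-prem node: $%.0f/month; cloud is cheaper below %.0f node-hours/month "
          "(about %.0f%% utilization)" % (onprem, breakeven_hours, 100 * breakeven_hours / 730))

With these made-up numbers, steady utilization above roughly half the month favors owning the node, while spiky or seasonal workloads favor the cloud, which matches the advice quoted below.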

"The best advice I could give is, whether you start in the cloud or on bare metal, make sure you have agility and you're able to move workloads around," says Harrigan. "If you choose one sort of architecture that only works in the cloud, and you are scaling up and have to do a rip-and-replace scenario just to get out of the cloud and move to on-premises, that's going to have a significant business impact."

More: Listen to the podcast of Dana Gardner's interview on fast analytics with Justin Harrigan and Chris Saso of Dasher Technologies.

Read more on tackling big data analytics. Learn how the future is all about fast data. Find out how big data trends affect your business.

13

STEVE TCHERCHIAN CISO amp Product Manager XYGATE SecurityOne XYPRO Technology

14

Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow; plus, we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We shared the space and the rent with four other companies. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which oftentimes we would forget to do, or just not do because "we were just too busy." After the theft we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line, as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the Mission-Critical computing marketplace.

15

How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning cont

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages…the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is – THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.

16

Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated as disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five US bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similar to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box is ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it can be difficult to detect. Therefore, identical security controls and inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures; disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Security Standards Council relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems, effectively prepare you for recovery operations while keeping security at the forefront, and keep your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering risks and the safeguards for maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data, which can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving systems, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Given this scope, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
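For illustration only, the sketch below shows the kind of ad hoc, file-based key lifecycle this section describes, using the open-source Python cryptography library. Everything in it, including the vault file name, is hypothetical and is not an HPE tool; a plaintext key file of this sort is exactly the single point of failure the rest of the article warns about.

import json
from pathlib import Path
from cryptography.fernet import Fernet

VAULT = Path("key_vault.json")  # hypothetical local key archive

def generate_and_archive(key_name):
    # Generate a data-encryption key and record it in the local vault file.
    key = Fernet.generate_key()
    vault = json.loads(VAULT.read_text()) if VAULT.exists() else {}
    vault[key_name] = key.decode()
    VAULT.write_text(json.dumps(vault))
    return key

def delete_key(key_name):
    # End of life: remove a key; data encrypted with it becomes unrecoverable.
    vault = json.loads(VAULT.read_text())
    vault.pop(key_name, None)
    VAULT.write_text(json.dumps(vault))

# Encrypt a record with a locally managed key.
key = generate_and_archive("backup-2016-q2")
token = Fernet(key).encrypt(b"sensitive record")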

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids the wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed here, along with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-locating keys with the encrypted data provides easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect, and misuse
• Places "all eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• A possible lack of local security skills creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple – similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit, without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility – if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified – or, commonly, enterprise secure key management – system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users, using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility – the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time through consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users could compromise the system – best practices can minimize these risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys


Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to the cost, administrative flexibility, and security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a client-side sketch follows this list).

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, help coordinate with enterprise policy and procedures.
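As a hedged illustration of the KMIP interoperability point above, the following sketch uses the open-source PyKMIP client library to create and retrieve an AES key from any KMIP-capable key manager. The hostname, certificate files, and key size shown are placeholders, and this is not HPE-specific code; it simply shows the general shape of a standards-based client interaction.

from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Hypothetical endpoint and client credentials for a KMIP-capable key manager.
client = ProxyKmipClient(
    hostname="keymanager.example.com",
    port=5696,
    cert="client-cert.pem",
    key="client-key.pem",
    ca="ca.pem",
)

with client:
    # Ask the server to create and vault a 256-bit AES key; only a UID is returned.
    key_uid = client.create(enums.CryptographicAlgorithm.AES, 256)
    # An authorized application later retrieves the key material by UID.
    key = client.get(key_uid)
    print(key_uid, key)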

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and the high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security – Data Security
HPE Security – Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use, and in-motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security – Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto the tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies – laser, inkjet, and PageWide – to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar spanning the entire width of a page, printing pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing – laser, inkjet, and now PageWide – can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents, without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class, by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory – from limited-capability M2M services to the super-capable IoT ecosystem – has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity – economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth) – which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

¹ Gartner, Forecast: Internet of Things – Endpoints and Associated Services, Worldwide, 2015
² The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify it, consolidate it, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices – standards-based and proprietary communication – and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, and to integrate devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE, 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable.

With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators – CSPs and enterprises alike – can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo, and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development/deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform over traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric – as data and its monetization are the essence of the IoT business model – and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform – manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module, you can manage IoT applications (configuration, tariff plans, subscriptions, device associations, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs – volume, variety, and velocity – typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
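As a hedged illustration of the device-facing side, the sketch below shows a sensor publishing a reading over MQTT, one of the protocols listed above, using the open-source paho-mqtt Python library. The broker address, topic, and payload fields are placeholders rather than anything defined by the HPE platform.

import json
import paho.mqtt.publish as publish

# Hypothetical reading from a metering device; broker and topic are placeholders.
reading = {"deviceId": "meter-0042", "kwh": 3.17, "ts": "2016-04-01T10:15:00Z"}

publish.single(
    topic="site-7/meters/meter-0042/telemetry",
    payload=json.dumps(reading),
    hostname="mqtt.example.com",
    port=1883,
    qos=1,  # at-least-once delivery for telemetry
)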

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, maintaining it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data unit transformation to application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
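On the north-bound side, the following hedged sketch shows roughly what an application's oneM2M-style retrieval of the latest reading might look like over HTTP REST. The base URL, resource path, and originator identifier are placeholders; the header names follow the oneM2M HTTP binding, but the exact resource tree and credentials depend entirely on the deployment.

import requests

CSE = "https://iot-platform.example.com:8443"           # placeholder platform address
resource = "/onem2m/cse-base/meter-0042/telemetry/la"    # 'la' = latest contentInstance

resp = requests.get(
    CSE + resource,
    headers={
        "X-M2M-Origin": "app-billing-01",  # registered application entity identifier
        "X-M2M-RI": "req-0001",            # request identifier for correlation
        "Accept": "application/json",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # payload arrives wrapped in a contentInstance representation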

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
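To make this concrete, here is a minimal sketch of that first segmentation step using Python and pandas. The file name and column names are hypothetical; any order export with a customer identifier, date, and amount would work the same way.

import pandas as pd

# Hypothetical order export: one row per order.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Count orders and total spend per customer to separate first-time from repeat buyers.
per_customer = orders.groupby("customer_id").agg(
    order_count=("order_date", "count"),
    revenue=("amount", "sum"),
)
per_customer["segment"] = per_customer["order_count"].map(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare average spend between the two segments.
print(per_customer.groupby("segment")["revenue"].mean())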

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper, Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage and location - all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening and raising chickens.



The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime - which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE) and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes - which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
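
As one illustration of that second step, the sketch below flags users whose current activity deviates sharply from their own established baseline. The input format (per-user daily activity counts pulled from audit logs, for example via XYGATE Merged Audit) and the three-standard-deviation threshold are illustrative assumptions; this is a toy example of the principle, not a feature of any particular product.

# Toy sketch: flag activity that deviates from a user's established baseline.
from statistics import mean, stdev

def is_suspicious(history, today, threshold=3.0):
    """Return True if today's count sits far above the user's historical norm."""
    if len(history) < 5:              # not enough baseline data to judge
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

if __name__ == "__main__":
    baseline = {"ops_user": [12, 9, 14, 11, 10, 13], "dba_user": [3, 4, 2, 3, 3, 5]}
    todays_counts = {"ops_user": 15, "dba_user": 41}
    for user, today in todays_counts.items():
        if is_suspicious(baseline[user], today):
            print(f"ALERT: {user} activity ({today}) deviates from established baseline")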

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study - but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems - including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle Tell us about how things have been going while Tom prepares to retire

Jeff Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle So the transition has already taken place

Jeff Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle So it doesn't differ too much, then, from your previous role.

Jeff No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent and people. I really feel good about moving into the role and I'm definitely ready for it.

Gabrielle Could you give us a little bit of information about your background leading into your time at HPE?

Jeff My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle Could you share one?

Tom Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender (nothing serious) but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle That's such a great story.

Jeff It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have within the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers and the partners. All of that just lends itself to the feeling you would want to have.

Tom Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff If you were to ask me five years ago where I would envision myself or what I would want to be doing - I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle It really is a unique community. I think we are all lucky to be a part of it.

Jeff Agreed

Tom I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle Jeff, what was it like to have Tom as your long-time mentor?

Jeff It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do and to weave it into your own style, is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle Tom, what has it been like from your perspective to be Jeff's mentor?

Tom Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get across those messages I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done over his career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle It's just kind of a bittersweet moment.

Jeff Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle Yes, Tom is kind of a legend in the NonStop space.

Jeff He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle We wish you the best of luck in your new position, Jeff.

Jeff Thank you


SQLXPress - Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New: Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release included many updates, there were still many issues - not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, or who review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first; and note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools - AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools - CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of Jan 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed and cURL ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
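
For readers unfamiliar with the mechanism, here is a highly simplified sketch of the config_h.com idea expressed in Python rather than DCL: answer the feature macros you know about, pass everything else through, and append an include of a hand-maintained header for the package-specific corrections. The KNOWN table and file names are illustrative assumptions; the real procedure does far more platform probing than this.

# Simplified illustration of the config.h.in -> config.h idea described above.
# KNOWN holds assumed answers for a few feature macros; real probing is omitted.
KNOWN = {"HAVE_STRING_H": True, "HAVE_UNISTD_H": True, "HAVE_MMAP": False}

def generate_config_h(template="config.h.in", output="config.h"):
    with open(template) as src, open(output, "w") as dst:
        for line in src:
            words = line.split()
            if len(words) == 2 and words[0] == "#undef" and KNOWN.get(words[1]):
                dst.write(f"#define {words[1]} 1\n")  # feature known to be present
            else:
                dst.write(line)                        # leave everything else alone
        # Package-specific overrides live in a separate, hand-written header.
        dst.write('\n#include "config_vms.h"\n')

if __name__ == "__main__":
    generate_config_h()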

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national Oceanography cruises, to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and has had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.



Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain underprotected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained essentially constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Figure: the data protection stack. From most specific to most general, the layers are Application, Database, Object (files, directories) and Storage (disk blocks), with formatted data items as the example traffic at the upper layers. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
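
To make those three properties concrete, here is a minimal sketch of a format-preserving transform over digit strings. It mimics the alternating-Feistel structure used by FFX-style modes, but it is a simplified illustration for this article, not the NIST-vetted FF1/FF3 algorithm, and the hard-coded key is for demonstration only; do not use it to protect real data.

# Toy format-preserving transform: digit string in, same-length digit string out.
# Deterministic (referential), invertible (reversible), format-preserving (equivalent).
import hmac, hashlib

ROUNDS = 10

def _round_value(key, rnd, other_half, width):
    """Pseudo-random integer in [0, 10**width) derived from the round and the other half."""
    msg = f"{rnd}:{other_half}:{width}".encode()
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % (10 ** width)

def protect(key, digits):
    n = len(digits); nl = n // 2; nr = n - nl
    L, R = int(digits[:nl]), int(digits[nl:])
    for i in range(ROUNDS):                      # alternating Feistel rounds
        if i % 2 == 0:
            L = (L + _round_value(key, i, str(R).zfill(nr), nl)) % (10 ** nl)
        else:
            R = (R + _round_value(key, i, str(L).zfill(nl), nr)) % (10 ** nr)
    return str(L).zfill(nl) + str(R).zfill(nr)

def unprotect(key, digits):
    n = len(digits); nl = n // 2; nr = n - nl
    L, R = int(digits[:nl]), int(digits[nl:])
    for i in reversed(range(ROUNDS)):            # undo the rounds in reverse order
        if i % 2 == 0:
            L = (L - _round_value(key, i, str(R).zfill(nr), nl)) % (10 ** nl)
        else:
            R = (R - _round_value(key, i, str(L).zfill(nl), nr)) % (10 ** nr)
    return str(L).zfill(nl) + str(R).zfill(nr)

if __name__ == "__main__":
    key, card = b"demo-key-only", "4111111111111111"
    token = protect(key, card)
    assert len(token) == len(card) and token.isdigit()  # equivalent
    assert token == protect(key, card)                   # referential
    assert unprotect(key, token) == card                 # reversible
    print(card, "->", token)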

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI) or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary: Legacy systems present challenges when applying storage, object and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct 2015.


"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky - especially around peak utilization times such as open enrollment and the financial close - as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability - 99.999 percent - is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you



Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open, every time.

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?


STEVE TCHERCHIAN, CISO & Product Manager, XYGATE SecurityOne, XYPRO Technology


Years ago I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow, plus we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We shared the space and the rent with four other companies. It was quite a nice setup and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time was mainly burning CDs, which oftentimes we would forget to do, or just not do because "we were just too busy." After the theft we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes were going as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCI-P, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (cont.)

BY THE NUMBERS
Business interruptions come in all shapes and sizes. From natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
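To put those per-hour figures in perspective, here is a quick back-of-the-envelope calculation using only the assumed numbers above; they are illustrative, not actual incident data.

```python
# Rough downtime-cost arithmetic based on the figures cited above (illustrative assumptions only)
hourly_cost_small = 8_000        # small business: assumed cost per hour of downtime (USD)
hourly_cost_large = 1_000_000    # large enterprise: an assumed "millions per hour" figure
outage_hours = 24                # half of outages reportedly last this long or longer

print(f"Small business, 24-hour outage:  ${hourly_cost_small * outage_hours:,}")
print(f"Large enterprise, 24-hour outage: ${hourly_cost_large * outage_hours:,}")
# -> roughly $192,000 and $24,000,000 for a single day of downtime
```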

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34 and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priority at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools and other solutions to protect their production systems. Typically only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it more difficult to detect. Therefore, identical security controls and inclusion of DR systems in active monitoring is not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishment for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It rsquos also di f f icult to imagine the PCI Data Security S t a n d a rd s C o m m i t te e re l a x i n g i t s re q u i re m e n t s on cardholder data protection for the duration a card processing application is running on a disaster recovery system Itrsquos just not going to happen

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront and keeping your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data, and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exist

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized Various approaches to managing keys are discussed with the impact toward supporting enterprise policy

Figure 1 Local key management over a local network where keys are stored with the encrypted storage

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under a doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment.

Figure 2 Remote key management separates encryption key management from the encrypted data


Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple: similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, an enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures governing which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies with reliability, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time with consistent policy and removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users could compromise the system; best practices can minimize risks

Figure 4 Central key management over wide area networks enables a single set of reliable controls and auditing over keys



Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility and the security assurance levels provided.

Hardware or software assurance. Key management servers deployed as appliances, virtual appliances or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard to measure security assurance levels; a hardened hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary. The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a brief client sketch follows this list).

Policy model. Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation. To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability. For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability. As applications scale and new applications are enrolled to a central key management system, keys, application connectivity and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging. Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts and similar methods helps ensure IT visibility.

Enterprise integration. As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
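Since KMIP is the interoperability point named above, here is a minimal, hedged sketch of what a KMIP client interaction can look like, using the open-source PyKMIP library. The hostname, port and certificate paths are placeholders rather than actual ESKM settings, and the exact workflow will vary by key manager.

```python
# Minimal KMIP client sketch using the open-source PyKMIP library.
# Hostname, port and certificate paths are placeholders, not real ESKM settings.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

client = ProxyKmipClient(
    hostname="keymanager.example.com",   # placeholder KMIP server address
    port=5696,                           # default KMIP port
    cert="/etc/pki/client-cert.pem",     # client certificate for mutual TLS (placeholder path)
    key="/etc/pki/client-key.pem",
    ca="/etc/pki/ca-cert.pem",
)

with client:
    # Create a 256-bit AES key on the key manager and receive its unique identifier
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # Later, an authorized application can retrieve the managed key by its identifier
    secret = client.get(key_id)
    print("Created key:", key_id)
```

Because the protocol is standardized, the same client code can in principle talk to any KMIP-compliant key manager, which is the reusability and ROI point made above.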

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence, while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys and users with a reliable set of controls.

Figure 5 Clustering key management enables endpoints to connect to local key servers, a primary data center and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it you can securely serve, control and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security: HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use and in-motion. Our solutions provide advanced encryption, tokenization and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE. Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google) and NextLabs. More recently he has also led security product lines at Trend Micro and Thales e-Security.




Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet and PageWide) to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents, without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services and solutions into one integrated approach. Working with you, we assess, deploy and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allow you to configure devices, conduct remote diagnostics and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage and share information, and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies and www.hp.com/go/printerspeeds



IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2
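As a rough illustration of what that growth rate implies (the 2013 installed base is not stated above; the figure below is simply back-solved from the cited forecast):

```python
# Back-solve the implied 2013 installed base from the cited 2020 forecast (illustrative only)
cagr = 0.317            # 31.7% compound annual growth rate
units_2020 = 20.8e9     # forecast installed base for 2020
years = 2020 - 2013

implied_2013_base = units_2020 / (1 + cagr) ** years
print(f"Implied 2013 installed base: {implied_2013_base / 1e9:.1f} billion units")
# Roughly 3 billion units in 2013, compounding at 31.7% per year, reaches ~20.8 billion by 2020
```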

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility or bandwidth), which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.

continued on pg 27

"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management and charging
• Data processing layer operators to acquire data, then verify, consolidate and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle when the devices become distributed by the millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while integrating devices connected by other Wi-Fi, fixed or mobile networks. These include GPRS (2G and 3G), LTE 4G and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries and municipalities

• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer) and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model

HPE equips IoT operators with end-to-end device remote management including device discovery configuration and software management The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, which manages the end-to-end lifecycle of the IoT service and associated gateways, devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

Figure: HPE Universal IoT Platform - manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016

29

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) as well as IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies used to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
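To make the device side of this concrete, the following minimal sketch shows a meter publishing one telemetry reading over MQTT, one of the protocols listed above. The broker address, topic layout, and payload fields are hypothetical illustrations only, not the HPE Universal IoT Platform's documented connector configuration.

# Minimal device-side sketch: publish one telemetry reading over MQTT.
# Assumes the open-source paho-mqtt client (pip install paho-mqtt).
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "iot-broker.example.com"   # hypothetical ingestion endpoint
TOPIC = "meters/meter-0042/telemetry"    # hypothetical topic naming scheme

client = mqtt.Client(client_id="meter-0042")
client.connect(BROKER_HOST, 1883)
client.loop_start()

# A protocol connector on the platform side would map this payload into the
# platform's uniform, oneM2M-aligned resource model.
reading = {"kwh": 12.7, "ts": int(time.time())}
client.publish(TOPIC, json.dumps(reading), qos=1)

client.loop_stop()
client.disconnect()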

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, maintaining it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access Control Policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring strong performance even for distinctly different types of operations, such as transactional operations and analytics or batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity for the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
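For the north-bound side, the short sketch below illustrates how an application might read the latest content instance from a oneM2M-style container over HTTP REST, in the spirit of the DAV interface described above. The CSE base URL, originator ID, and resource names are hypothetical examples, not HPE-documented endpoints.

# North-bound sketch: fetch the latest contentInstance from a oneM2M-style container.
import uuid

import requests

CSE_BASE = "https://iot-platform.example.com/~/in-cse/in-name"  # hypothetical CSE base

HEADERS = {
    "X-M2M-Origin": "CAppDemo",      # oneM2M originator (application entity) ID
    "X-M2M-RI": str(uuid.uuid4()),   # per-request identifier
    "Accept": "application/json",
}

# 'la' is the oneM2M virtual resource that addresses the latest contentInstance.
resp = requests.get(CSE_BASE + "/meter-0042/telemetry/la", headers=HEADERS, timeout=10)
resp.raise_for_status()
print(resp.json())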

Data Analytics
The Data Analytics module leverages HPE Vertica technology to discover meaningful patterns in the data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, batch as well as real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
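As a rough illustration of the descriptive analytics mentioned above, the sketch below runs a simple aggregation against Vertica using the open-source vertica-python driver. The connection details, table, and column names are hypothetical, not part of the HPE Universal IoT Platform schema.

# Descriptive analytics sketch against Vertica (pip install vertica-python).
import vertica_python

conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analyst",
    "password": "********",
    "database": "iot",
}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# Daily average consumption per meter over the last week (hypothetical table).
cur.execute("""
    SELECT meter_id, reading_ts::DATE AS day, AVG(kwh) AS avg_kwh
    FROM meter_readings
    WHERE reading_ts >= CURRENT_DATE - 7
    GROUP BY meter_id, reading_ts::DATE
    ORDER BY meter_id, day
""")
for meter_id, day, avg_kwh in cur.fetchall():
    print(meter_id, day, avg_kwh)

conn.close()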

Operations and Business Support Systems (OSS/BSS)
The OSS/BSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The OSS/BSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services

• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented IoT ecosystem requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application; sits on top of ubiquitous, securely managed connectivity; and enables identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data depends on the size of the company; the insights gleaned from it do not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products. By analyzing customer behavior it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations. Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage. Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover. The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
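As a minimal sketch of that first segmentation step, the following snippet splits customers into first-time and repeat buyers from a hypothetical orders export; the file name and column names (customer_id, order_id, amount) are assumptions for illustration only.

# Segment first-time vs. repeat customers from a hypothetical orders export.
import pandas as pd

orders = pd.read_csv("orders.csv")  # one row per order

per_customer = orders.groupby("customer_id").agg(
    order_count=("order_id", "nunique"),
    total_spend=("amount", "sum"),
)
per_customer["segment"] = per_customer["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare purchasing behavior across the two segments.
print(per_customer.groupby("segment")["total_spend"].describe())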

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals.

The Denver Museum of Nature and Science hosts 1.4 million guests each year who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for staff authentication and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements made possible by the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced consistently good performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39 percent and 35 percent of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder
Senior Director, Business Development & Strategic Alliances

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98 to 99 percent of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35 percent of companies had experienced malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50 percent of companies in the study had implemented access governance tools, and fewer than 45 percent had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23 percent ROI and encryption technologies had a 21 percent ROI. Access governance had a 13 percent ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

Jeff (continued): It was really e-commerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?", even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more. With full support for both SQL/MP and SQL/MX.

New: Now audits 100% of all SQL/MX and SQL/MP user activity, integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been an effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release had many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking onto an ODS-2 volume, in addition to handling the ODS-5 format name.
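As a rough conceptual sketch of what such a generator does (shown here in Python purely for illustration; the real config_h.com is a DCL procedure, and the symbol list below is hypothetical), the idea is to fill in the symbols it can derive from config.h.in and leave the rest to a package-specific config_vms.h:

# Conceptual illustration only; not the actual config_h.com logic.
KNOWN = {"HAVE_UNISTD_H": "1", "HAVE_STDLIB_H": "1"}  # hypothetical probe results

with open("config.h.in") as src, open("config.h", "w") as dst:
    for line in src:
        if line.startswith("#undef "):
            name = line.split()[1]
            if name in KNOWN:
                dst.write("#define %s %s\n" % (name, KNOWN[name]))
                continue
        dst.write(line)
    # Package-specific overrides cover whatever could not be derived above.
    dst.write('#include "config_vms.h"\n')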

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are also getting updates and being kept current on a regular basis. These include a new subprocess module for Python as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to see your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best and completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee [3]; such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased on the order of O(2^n), as illustrated by the following diagram [1].

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

[1] This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31 percent are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24 percent of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications [7]. Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing their originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years [8]. It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection [2] responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data [10].

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


[2] We use the term "protection" for a generic algorithm that transforms data from its original, plain-text form to an encoded, cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Figure: The data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20 percent of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits [3]. The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
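To make the ingress and egress pattern concrete, here is a minimal Python sketch of the wrapper idea. The DigitShiftCipher class and the protect_pan/reveal_pan helpers are hypothetical names invented for illustration; the cipher is only an insecure stand-in for a real NIST FF1/FF3-1 FPE implementation, but it exhibits the equivalent, referential, and reversible properties described above.

```python
# Sketch only: DigitShiftCipher is a toy, NOT a NIST-approved FPE algorithm.
# In production the cipher would be an FF1/FF3-1 implementation keyed from a
# managed key store; the rest shows the ingress/egress wrapper pattern.

class DigitShiftCipher:
    """Deterministic, format-preserving placeholder for an FPE algorithm."""

    def __init__(self, key: bytes):
        self._shifts = [b % 10 for b in key]

    def encrypt(self, digits: str) -> str:
        return "".join(
            str((int(d) + self._shifts[i % len(self._shifts)]) % 10)
            for i, d in enumerate(digits)
        )

    def decrypt(self, digits: str) -> str:
        return "".join(
            str((int(d) - self._shifts[i % len(self._shifts)]) % 10)
            for i, d in enumerate(digits)
        )


cipher = DigitShiftCipher(key=b"replace-with-a-managed-key")


def protect_pan(pan: str) -> str:
    """Called at ingress: the legacy system only ever stores this value."""
    return cipher.encrypt(pan)


def reveal_pan(protected: str) -> str:
    """Called at egress, only by the payment interface."""
    return cipher.decrypt(protected)


if __name__ == "__main__":
    pan = "4111111111111111"
    token = protect_pan(pan)
    assert len(token) == len(pan) and token.isdigit()  # equivalent
    assert protect_pan(pan) == token                    # referential
    assert reveal_pan(token) == pan                     # reversible
    print(pan, "->", token)
```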

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3. American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers. 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct 2015.


"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story. RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices for modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in the program's fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open - every time.

You wouldn't jump out of an airplane unless you knew your parachute worked - would you?


Years ago, I was one of three people in a startup company providing design and development services for web hosting and online message boards. We started the company on a dining room table. As we expanded into the living room, we quickly realized that it was getting too cramped and we needed more space to let our creative juices flow, plus we needed to find a way to stop being at each other's throats. We decided to pack up our laptops and move into a co-working space in Venice, California. We were one of four companies using the space and sharing the rent. It was quite a nice setup, and we were enjoying the digs. We were eager to get to work in the morning and sometimes wouldn't leave till very late in the evening.

One Thursday morning, as we pulled up to the office to start the day, we noticed the door wide open. Someone had broken into the office in the middle of the night and stolen all of our equipment: laptops, computers, etc. This was before the time of cloud computing, so data backup at that time mainly meant burning CDs, which we would often forget to do or just skip because "we were just too busy." After the theft, we figured we would purchase new laptops and recover from the latest available backups. As we tried to restore our data, none of the processes went as planned. Either the data was corrupted, or the CD was completely blank or too old to be of any value. Within a couple of months we bit the bullet and had no choice but to close up shop.

continued on page 15

Steve Tcherchian, CISSP, PCI-ISA, PCIP, is the CISO and SecurityOne Product Manager for XYPRO Technology. Steve is on the ISSA CISO Advisory Board and a member of the ANSI X9 Security Standards Committee. With almost 20 years in the cybersecurity field, Steve is responsible for XYPRO's new security product line as well as overseeing XYPRO's risk, compliance, infrastructure, and product security to ensure the best security experience for customers in the mission-critical computing marketplace.


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning (continued)

BY THE NUMBERS
Business interruptions come in all shapes and sizes. From natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages...the list goes on and on. In today's landscape, the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises, that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.
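As a quick back-of-the-envelope illustration of those figures (the $8,000-per-hour and 24-hour numbers come from the studies cited above), a single prolonged outage already lands in six-figure territory for a small business:

```python
# Rough downtime exposure for a small business, using the figures cited above.
hourly_cost = 8_000   # dollars per hour of downtime
outage_hours = 24     # half of outages can last 24 hours or longer

print(f"Estimated loss: ${hourly_cost * outage_hours:,}")  # Estimated loss: $192,000
```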

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is: THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev. 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much-needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated for disaster recovery is treated as secondary infrastructure; as such, the need to properly secure (and budget) for it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions are deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top-five US bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box is ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems, effectively prepare you for recovery operations while keeping security at the forefront, and keep your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on the scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.
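To illustrate how bare-bones a manual, local key lifecycle can be, here is a minimal Python sketch using only the standard library. The paths and helper names are hypothetical, and the point is the fragility: the key is generated, archived, and deleted on the same host that holds the encrypted data, with no central backup or audit trail.

```python
import secrets
from pathlib import Path

KEY_DIR = Path("/var/local/keystore")  # hypothetical local key archive

def generate_key(name: str, length: int = 32) -> Path:
    """Generate a random key and archive it on the local disk."""
    KEY_DIR.mkdir(parents=True, exist_ok=True)
    key_file = KEY_DIR / f"{name}.key"
    key_file.write_bytes(secrets.token_bytes(length))
    key_file.chmod(0o600)  # manual hardening step that is easy to forget
    return key_file

def delete_key(name: str) -> None:
    """Delete a key; with no central backup, data encrypted under it is unrecoverable."""
    (KEY_DIR / f"{name}.key").unlink()

if __name__ == "__main__":
    key_path = generate_key("backup-tape-2016")
    print("Key archived at", key_path, "- co-located with the data it protects")
```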

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that would undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, along with their impact toward supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage.

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under a doormat, or leaving the keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audits and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect, and misuse
• Places "all eggs in a basket" for key archives and data without the benefit of remote backups or audit logs
• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
• Less reliable audits with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data.


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which makes the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple: similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, an enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to control which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys.

Local, remote, and centrally unified key management (continued)


Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard to measure security assurance levels. A hardened hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
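As a sketch of the standards-based consideration above, the open-source PyKMIP client can exercise the KMIP operations an endpoint application would use against any KMIP-compliant key manager. The hostname, port, and certificate paths below are placeholder assumptions, not product defaults; consult your key manager's documentation for the exact client configuration.

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Placeholder connection details for a KMIP-compliant key manager.
client = ProxyKmipClient(
    hostname="keymanager.example.com",  # assumed address of the key server
    port=5696,                          # standard KMIP port
    cert="/etc/pki/client-cert.pem",    # client credentials issued by the
    key="/etc/pki/client-key.pem",      # key manager's local CA (assumed paths)
    ca="/etc/pki/kmip-ca.pem",
)

with client:
    # Create and activate a 256-bit AES key on the key manager.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
    client.activate(key_id)

    # Retrieve the key when the endpoint needs to encrypt or decrypt.
    key = client.get(key_id)

    # If the endpoint device is lost or stolen, the key can be revoked and
    # destroyed remotely, rendering the data on the device unreadable.
    client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, key_id)
    client.destroy(key_id)
```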

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies: HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover; identity and access management for administrators and encryption devices; secure backup and recovery; a local certificate authority; and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies - laser, inkjet, and PageWide - to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar spanning the entire width of a page, printing pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing - laser, inkjet, and now PageWide - can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents, without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class, by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit: www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, www.hp.com/go/printerspeeds


IoT Evolution Today itrsquos almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT) IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the ldquosmart worldrdquo around us From wellness and health monitoring to smart utility meters integrated logistics and self-driving cars and the world of IoT is fast becoming a hyper-automated one

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications, and hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions. Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015.
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Marketplace, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview. For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (both standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management and for data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture. The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM). The DSM module is the nerve center of the HPE Universal IoT Platform; it manages the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plans, subscriptions, device associations, and more) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP). The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
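
To picture the device-facing side of such a connector, here is a minimal sketch of a sensor publishing a JSON reading over MQTT, one of the protocols listed above, assuming the third-party paho-mqtt Python client. The broker address, topic layout, and payload fields are invented for illustration and are not part of the HPE Universal IoT Platform's actual interfaces.

```python
# Hedged sketch: a device publishing telemetry over MQTT, one of the protocols
# a NIP-style connector might ingest. Broker, topic, and payload are examples.
import json
import time

import paho.mqtt.client as mqtt  # third-party MQTT client library

BROKER_HOST = "iot-broker.example.com"   # hypothetical platform entry point
TOPIC = "meters/site-42/electricity"     # hypothetical topic naming scheme

client = mqtt.Client(client_id="meter-0001")   # paho-mqtt 1.x style constructor
client.connect(BROKER_HOST, 1883)

reading = {"deviceId": "meter-0001", "timestamp": int(time.time()), "kwh": 12.7}
client.publish(TOPIC, json.dumps(reading), qos=1)
client.disconnect()
```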

Data Acquisition and Verification (DAV). DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, maintaining it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and support for data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access control policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
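
To make the north-bound exposure concrete, the sketch below shows how an application might fetch the latest stored reading for a device over a oneM2M-style HTTP REST binding using the Python requests library. The endpoint URL, resource path, and credential values are placeholders; the exact resource names and headers on a real deployment would come from the platform's own documentation.

```python
# Hedged sketch: an application retrieving the latest content instance for a
# device container over a oneM2M-style HTTP REST binding. All names are examples.
import requests

CSE_BASE = "https://iot-platform.example.com/onem2m"   # hypothetical CSE endpoint
RESOURCE = "/cse-base/meter-0001/telemetry/la"         # 'la' = latest content instance

response = requests.get(
    CSE_BASE + RESOURCE,
    headers={
        "X-M2M-Origin": "C-application-01",   # originator (application entity) ID
        "X-M2M-RI": "req-0001",               # request identifier
        "Accept": "application/json",
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())   # JSON wrapper around the device reading
```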

Data Analytics. The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
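
For a sense of what a simple descriptive analytics job against such a store can look like, here is a minimal sketch that computes a daily consumption KPI per device using the vertica-python client. The connection settings, table name (meter_readings), and column names are assumptions made for illustration, not the module's actual schema or API.

```python
# Hedged sketch: a descriptive KPI query (daily kWh per device over 30 days)
# run against IoT readings held in a columnar store such as Vertica.
import vertica_python  # third-party Vertica client

conn_info = {
    "host": "analytics.example.com",
    "port": 5433,
    "user": "iot_analyst",
    "password": "********",
    "database": "iot",
}

QUERY = """
    SELECT device_id,
           DATE_TRUNC('day', reading_time) AS day,
           SUM(kwh) AS daily_kwh
    FROM   meter_readings
    WHERE  reading_time >= CURRENT_DATE - 30
    GROUP  BY device_id, DATE_TRUNC('day', reading_time)
    ORDER  BY device_id, day
"""

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute(QUERY)
    for device_id, day, daily_kwh in cursor.fetchall():
        print(device_id, day, daily_kwh)
```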

Operations and Business Support Systems (OSS/BSS). The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC). The DSC module enables advanced monetization models, especially those fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success. The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, that sits on top of ubiquitous, securely managed connectivity, and that enables identification, development, and rollout of the industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS. If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE. If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy turning previously ignored data into business gold

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING. Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products. By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations. Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage. Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover. The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED: Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."
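
As a minimal sketch of that advice, the snippet below segments customers into first-time and repeat buyers and compares their order counts and spend, using pandas. The input file and column names (order_id, customer_id, order_total, order_date) are assumptions about your own order export, not a prescribed format.

```python
# Hedged sketch: first-time vs. repeat customer segmentation with pandas.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])  # hypothetical export

per_customer = orders.groupby("customer_id").agg(
    order_count=("order_id", "count"),
    total_spend=("order_total", "sum"),
)
per_customer["segment"] = per_customer["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare average order frequency and spend between the two segments.
print(per_customer.groupby("segment")[["order_count", "total_spend"]].mean())
```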

PUT THE FOUNDATION IN PLACE: Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER. One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.
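
To show the kind of contextual decision such a policy engine makes, here is a deliberately simplified sketch in plain Python. It is not ClearPass configuration syntax or an API call; the role names, device types, and VLAN labels are invented purely for illustration.

```python
# Illustrative only: a toy decision function for contextual network access,
# in the spirit of role/device/location-based policy. Not ClearPass syntax.
def network_role(user_role: str, device_type: str, location: str) -> str:
    if user_role == "guest":
        return "guest-vlan"                      # internet-only, kept apart from internal traffic
    if user_role == "staff" and device_type == "corporate-laptop":
        return "employee-vlan"                   # full access
    if user_role == "staff" and device_type in {"phone", "tablet"}:
        return "byod-vlan"                       # limited access for personal devices
    if location == "public-exhibit":
        return "guest-vlan"
    return "quarantine-vlan"                     # unknown context: restrict until profiled

print(network_role("staff", "tablet", "office-wing"))  # -> byod-vlan
```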

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98 to 99 percent of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35 percent of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
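
One simple way to picture that second step is baseline-deviation alerting: compare each user's activity today against their own history and flag large departures. The sketch below is only a conceptual illustration with made-up event counts and a made-up threshold; real monitoring would draw on audited events (for example, from XYGATE Merged Audit or a SIEM) and far richer rules.

```python
# Hedged sketch: flag users whose daily count of privileged operations deviates
# strongly from their own historical baseline. Data and threshold are examples.
from statistics import mean, pstdev

def flag_anomalies(history: dict[str, list[int]], today: dict[str, int],
                   sigma: float = 3.0) -> list[str]:
    """Return users whose activity today is more than `sigma` deviations above baseline."""
    suspicious = []
    for user, counts in history.items():
        baseline = mean(counts)
        spread = pstdev(counts) or 1.0           # avoid divide-by-zero on a flat history
        if (today.get(user, 0) - baseline) / spread > sigma:
            suspicious.append(user)
    return suspicious

history = {"ops1": [4, 5, 6, 5, 4], "dba2": [10, 12, 11, 9, 10]}
print(flag_anomalies(history, {"ops1": 5, "dba2": 60}))   # -> ['dba2']
```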

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50 percent of companies in the study had implemented access governance tools, and fewer than 45 percent had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kenneth.scudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies, in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP at Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress. Not just another pretty face.

An integrated SQL database manager for HP NonStop.

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

New: Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release contained many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, who review the monthly conference call notes, or who listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools - AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools - CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files for large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
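
That flow is easier to see in pseudocode form. The sketch below is written in Python purely for illustration (the real tool is a DCL command procedure); it fills in the macros the generator knows about and appends the include of the package-specific config_vms.h. The KNOWN_DEFINES table is a made-up example, not the actual contents of config_h.com.

```python
# Hedged illustration of the config_h.com idea: answer what you can from a
# config.h.in template and defer the rest to a hand-maintained config_vms.h.
KNOWN_DEFINES = {          # invented example of the "95 percent" the generator can answer
    "HAVE_STRING_H": "1",
    "HAVE_UNISTD_H": "1",
    "SIZEOF_INT": "4",
}

def generate_config_h(template_path: str, output_path: str) -> None:
    lines = []
    with open(template_path) as template:
        for line in template:
            if line.startswith("#undef "):
                macro = line.split()[1]
                if macro in KNOWN_DEFINES:
                    line = f"#define {macro} {KNOWN_DEFINES[macro]}\n"
                else:
                    line = f"/* {macro} left for config_vms.h */\n"
            lines.append(line)
    lines.append('#include "config_vms.h"\n')    # package-specific corrections go last
    with open(output_path, "w") as output:
        output.writelines(lines)

generate_config_h("config.h.in", "config.h")
```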

In many ways the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site. Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site. Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive. Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives. There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-Franccedilois is active in the Python community and distributes Python of OpenVMS as well as several Python based applications including the Mercurial SCM system Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community Mark has been active in Open Source for many years He ported MySQL started the port of PostgreSQL and has also ported MariaDB

As more and more of the GNU environment gets updated and tested on OpenVMS the effort to port newer and more critical Open Source application packages are being ported to OpenVMS The foundation is getting stronger every day We still have many tasks ahead of us but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute

Keep watching this space for more progress

We would be happy to see your help on the projects as well


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain underprotected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best and completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined. Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee [3], as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems. In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained essentially constant, bounded by O(1), the number of records exposed has increased on the order of O(2^n), as illustrated by the following diagram (see footnote 1).

Integrating Data Protection Into Legacy Systems: Methods and Practices, by Jason Paul Kazarian

Footnote 1: This analysis excludes the Anthem Inc. breach, reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications [7]. Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years [8]. It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection responsibility (see footnote 2), while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data [10].

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


Footnote 2: We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Figure: the data protection stack. Layers, from top to bottom: Application, Database, Object, Storage. Example traffic between layers includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits (see footnote 3). The output of this algorithm is another digit string that is (a brief sketch of these properties follows the list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
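To make these three properties concrete, the toy sketch below applies a keyed, alternating Feistel-style transform to a digit string. It is illustrative only: it is not the NIST FFX (FF1/FF3) algorithm discussed in this paper, it has had no cryptographic review, and the key, card number, and function names are hypothetical.

    import hashlib
    import hmac

    def _prf(key: bytes, tweak: str, width: int) -> int:
        # Keyed pseudo-random function returning an integer of at most `width` digits.
        digest = hmac.new(key, tweak.encode(), hashlib.sha256).hexdigest()
        return int(digest, 16) % (10 ** width)

    def _feistel(key: bytes, digits: str, decrypt: bool, rounds: int = 10) -> str:
        # Alternating additive rounds over the two halves of the digit string.
        half = len(digits) // 2
        a, b = digits[:half], digits[half:]
        for i in (reversed(range(rounds)) if decrypt else range(rounds)):
            sign = -1 if decrypt else 1
            if i % 2 == 0:
                a = str((int(a) + sign * _prf(key, f"{i}:{b}", len(a))) % 10 ** len(a)).zfill(len(a))
            else:
                b = str((int(b) + sign * _prf(key, f"{i}:{a}", len(b))) % 10 ** len(b)).zfill(len(b))
        return a + b

    def protect(key: bytes, pan: str) -> str:
        # Deterministic, length- and charset-preserving transform of a card number.
        return _feistel(key, pan, decrypt=False)

    def recover(key: bytes, token: str) -> str:
        # Inverse transform, restoring the original card number.
        return _feistel(key, token, decrypt=True)

    key = b"key-held-only-by-the-payment-interface"   # hypothetical key
    original = "4111111111111111"                      # hypothetical test card number
    token = protect(key, original)
    assert len(token) == len(original) and token.isdigit()   # equivalent
    assert token == protect(key, original)                   # referential
    assert recover(key, token) == original                   # reversible

A production deployment would instead use a vetted FF1 implementation held behind the payment interface; the point here is only that the equivalent, referential, and reversible properties can coexist in a single transform.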

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.
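The referential property is what keeps shared data useful. The small sketch below, with made-up table contents and a keyed hash standing in for a full FPE transform (so it demonstrates only the referential property, not reversibility), shows that two de-identified data sets can still be joined on the protected key:

    import hashlib
    import hmac

    def pseudonym(key: bytes, value: str) -> str:
        # Deterministic keyed pseudonym: the same input always maps to the same output.
        return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

    key = b"analytics-sharing-key"   # hypothetical key retained by the data owner

    visits = [{"patient_id": "123-45-6789", "diagnosis": "hypertension"}]
    claims = [{"patient_id": "123-45-6789", "amount": 180.00}]

    # De-identify both data sets before handing them to a third-party analytics firm.
    visits_shared = [{**row, "patient_id": pseudonym(key, row["patient_id"])} for row in visits]
    claims_shared = [{**row, "patient_id": pseudonym(key, row["patient_id"])} for row in claims]

    # The analytics firm can still correlate records without ever seeing a real identifier.
    joined = [(v, c) for v in visits_shared for c in claims_shared
              if v["patient_id"] == c["patient_id"]]
    assert len(joined) == 1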

Footnote 3: American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter. Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum. Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security. Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money. NBC News, 5 June 2013. Web. 09 Oct 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT. HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct 2015.


"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky - especially around peak utilization times such as open enrollment and the financial close - as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability - 99.999 percent - is critical to the success of the business to avoid loss of customers and revenue.

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open - every time.

You wouldn't jump out of an airplane unless you knew your parachute worked - would you?


How to Survive the Zombie Apocalypse (and Other Disasters) with Business Continuity and Security Planning cont

BY THE NUMBERS
Business interruptions come in all shapes and sizes: natural disasters, cyber security incidents, system failures, human error, operational activities, theft, power outages... the list goes on and on. In today's landscape the lack of business continuity planning not only puts companies at a competitive disadvantage but can spell doom for the company as a whole. Studies show that a single hour of downtime can cost a small business upwards of $8,000. For large enterprises that number skyrockets to millions. That's 6 zeros, folks. Compound that with the fact that 50% of system outages can last 24 hours or longer, and we're talking about scarily large figures.

The impact of not having a business continuity plan doesn't stop there. As if those numbers weren't staggering enough, a study done by the AXA insurance group showed 80% of businesses that suffered a major outage filed for bankruptcy within 18 months, with 40 percent of them out of business in the first year. Needless to say, business continuity planning (BCP) and disaster recovery (DR) are critical components, and lack of planning in these areas can pose a serious risk to any modern organization.

We can talk numbers all day long about why BCP and DR are needed, but the bottom line is - THEY ARE NEEDED. Frameworks such as NIST Special Publication 800-53 Rev 4, 800-34, and ISO 22301 define an organization's "capability to continue to deliver its products and services at acceptable predefined levels after disruptive incidents have occurred." They provide much needed guidance on the types of activities to consider when formulating a BCP. They can assist organizations in ensuring business continuity and disaster recovery systems will be there, available and uncompromised, when required.

DISASTER RECOVERY: DON'T LOSE SIGHT OF SECURITY & RISK
Once established, business continuity and disaster recovery strategies carry their own layer of complexities that need to be properly addressed. A successful implementation of any disaster recovery plan is contingent upon the effectiveness of its design. The company needs access to the data and applications required to keep the company running, but unauthorized access must be prevented.

Security and privacy considerations must be included in any disaster recovery planning.


Security and risk are top priority at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated as disaster recovery is looked at and treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically only a subset of those security solutions are deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five US bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches of backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and inclusion of DR systems in active monitoring is not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems and effectively prepare you for recovery operations while keeping security at the forefront and your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data, and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and the operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, and archive or vault keys for long-term recovery - and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists.

Local key management has potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage.

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications - the analogy being locking your front door but placing keys under a doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, since each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles - prone to error, neglect, and misuse
• Places "all eggs in a basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; creates higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles - media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data.
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data.


without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple - similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility - if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified - or, commonly, an enterprise secure key management - system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation.
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility - the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time with consistent policy and removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users could compromise the system - best practices can minimize risks

Figure 4 Central key management over wide area networks enables a single set of reliable controls and auditing over keys


Best practices - adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI - relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels. A hardened hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
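As an illustration of the KMIP interoperability point above, the sketch below uses the open-source PyKMIP client library to create, retrieve, and destroy an AES key on a KMIP-compliant key manager. The hostname, port, and certificate paths are hypothetical placeholders, and the exact client options depend on the PyKMIP version and the key manager's configuration; treat this as an assumed outline rather than vendor-specific guidance.

    from kmip.core import enums
    from kmip.pie.client import ProxyKmipClient

    # Connection details are placeholders; a real deployment supplies the key
    # manager's address and mutually authenticated TLS credentials.
    client = ProxyKmipClient(
        hostname="keymanager.example.com",
        port=5696,
        cert="/etc/pki/kmip/client-cert.pem",
        key="/etc/pki/kmip/client-key.pem",
        ca="/etc/pki/kmip/ca.pem",
    )

    with client:
        # Create a 256-bit AES key on the key manager and note its unique identifier.
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

        # Later, an authorized application retrieves the key material by identifier.
        key = client.get(key_id)

        # Destroying the key on the server revokes access for every endpoint.
        client.destroy(key_id)

Because the protocol is standardized, the same client code can in principle talk to any KMIP-conformant server, which is the interoperability benefit the article attributes to KMIP support.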

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies: HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach with strong access controls that del iver re l iable secur i ty you ensure cont inuous and appropriate avai labi l i ty to keys whi le support ing audit and compliance requirements You reduce administrative costs human error exposure to policy compliance failures and the risk of data breaches and business interruptions And you can also minimize dependence on costly media sanit izat ion and destruct ion services

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication, even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs - the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and has scaled it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies - laser, inkjet, and PageWide - to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing - laser, inkjet, and now PageWide - can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small workgroups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of different options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory - from limited-capability M2M services to the super-capable IoT ecosystem - has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity - economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth) - which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low-throughput network connectivity.



Delivering on the IoT Customer Experience

1. Gartner, "Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015."
2. "The Internet of Things: Making Sense of the Next Mega-Trend," Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Marketplace, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage the whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices are distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software - widely used in the communications industry - by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices - both standards-based and proprietary communication - and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE, 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators - CSPs and enterprises alike - can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer), and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and to various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development/deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric - as data and its monetization are the essence of the IoT business model - and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules:

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and the associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform - manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C, and B2B2C.

With the DSM module, you can manage IoT applications - configuration, tariff plan, subscription, device association, and others - as well as IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs - volume, variety, and velocity - typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies needed to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols, such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
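
To make the south-bound side concrete, here is a hedged sketch of how a device or gateway might push a single telemetry reading over MQTT, one of the protocols listed above, using the open-source paho-mqtt client. The broker hostname, topic naming, and payload fields are assumptions for illustration, not documented platform endpoints.

```python
# Device-side sketch: publishing one sensor reading over MQTT with the
# open-source paho-mqtt client. Broker address, topic, and payload fields
# are hypothetical placeholders.
import json
import time
import paho.mqtt.publish as publish

BROKER = "iot-ingest.example.com"          # placeholder broker hostname
TOPIC = "devices/meter-0042/telemetry"     # placeholder topic scheme

reading = {"ts": int(time.time()), "kwh": 12.7, "voltage": 229.8}

# One-shot publish; QoS 1 asks the broker to acknowledge receipt at least once.
publish.single(TOPIC, payload=json.dumps(reading), qos=1,
               hostname=BROKER, port=1883)
```

Whatever the wire protocol, the NIP's protocol connectors normalize readings like this into the platform's uniform resource model.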

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and support for data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access Control Policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
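
On the north-bound side, because the resource tree is oneM2M-aligned and exposed over HTTP REST, an application can read the latest reported value for a container with an ordinary GET. The sketch below uses the Python requests library with oneM2M-style headers; the base URL, originator ID, and resource path are illustrative assumptions, not documented HPE endpoints.

```python
# Sketch: a north-bound application fetching the latest content instance of a
# sensor container over a oneM2M-style HTTP REST interface. All names below
# (base URL, originator, resource path) are hypothetical placeholders.
import requests

BASE_URL = "https://iot-platform.example.com/onem2m"  # placeholder CSE base
RESOURCE = "/cse-in/meter-0042/telemetry/la"          # "la" = latest instance

headers = {
    "X-M2M-Origin": "S-billing-app-01",  # originator (application) identifier
    "X-M2M-RI": "req-0001",              # request identifier
    "Accept": "application/json",
}

resp = requests.get(BASE_URL + RESOURCE, headers=headers, timeout=10)
resp.raise_for_status()

# A oneM2M contentInstance wraps the device payload in its "con" attribute.
content_instance = resp.json().get("m2m:cin", {})
print("Latest reading:", content_instance.get("con"))
```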

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time - based on 'Complex-Event Processing' - for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
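
Because the analytics layer sits on HPE Vertica, telemetry that has landed in the columnar store can be explored with plain SQL. Below is a hedged sketch using the open-source vertica-python driver; the connection details, table, and column names are assumptions for illustration only.

```python
# Sketch: querying stored IoT telemetry in Vertica with the open-source
# vertica-python driver. Connection settings and schema are hypothetical.
import vertica_python

conn_info = {
    "host": "analytics.example.com",  # placeholder Vertica host
    "port": 5433,
    "user": "iot_analyst",
    "password": "********",
    "database": "iotdw",
}

# Hourly average consumption per device over the last day (assumed schema).
QUERY = """
    SELECT device_id,
           DATE_TRUNC('hour', reported_at) AS hour,
           AVG(kwh) AS avg_kwh
    FROM telemetry.meter_readings
    WHERE reported_at >= NOW() - INTERVAL '1 day'
    GROUP BY device_id, DATE_TRUNC('hour', reported_at)
    ORDER BY device_id, hour
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    for device_id, hour, avg_kwh in cur.fetchall():
        print(device_id, hour, round(avg_kwh, 2))
```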

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines their impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application - one that sits on top of ubiquitous, securely managed connectivity and enables the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY
Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear - the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location - all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS? CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend - both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime - which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances
Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes - which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study - but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems - including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or reach out to your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf. Back in the early Tandem days, there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender - nothing serious - but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP)-focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned, veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community - everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing - I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who has motivated you to be able to influence people like Jeff?

Tom: Oh yes, I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day, he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things that config_h.com does not know how to get correct for a specific package before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
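To make the idea concrete, here is a minimal, hypothetical sketch (written in Python rather than the DCL used by the actual config_h.com) of how a config.h can be assembled from a config.h.in template plus a hand-written, package-specific config_vms.h, as described above. The file names and the HAVE_* symbols are illustrative assumptions, not the real procedure's logic.

```python
# Illustrative sketch only: approximates the idea behind config_h.com
# (read config.h.in, resolve what we can, defer the rest to config_vms.h).
# This is NOT the actual DCL procedure shipped with the GNV kits.
known = {"HAVE_UNISTD_H": 1, "HAVE_STRING_H": 1}   # assumed, package-specific answers

def generate_config_h(template="config.h.in", output="config.h"):
    lines = []
    with open(template) as src:
        for line in src:
            token = line.split()
            # "#undef HAVE_FOO" lines become "#define HAVE_FOO 1" when known
            if len(token) == 2 and token[0] == "#undef" and token[1] in known:
                lines.append(f"#define {token[1]} {known[token[1]]}\n")
            else:
                lines.append(line)
    # Anything the generic pass cannot answer is handled by the hand-written
    # OpenVMS-specific header, included at the end as the article describes.
    lines.append('#include "config_vms.h"\n')
    with open(output, "w") as dst:
        dst.writelines(lines)

if __name__ == "__main__":
    generate_config_h()
```

The design point is the same as in the real kits: the generic pass covers the common cases, and the per-package header carries the OpenVMS-specific answers it cannot derive.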

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national Oceanography Cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, the A2PS ASCII-to-PostScript converter, an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Figure: The data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic at each layer: formatted data items (application and database), files and directories (object), disk blocks (storage). Flow represents transport of clear data between layers via a secure tunnel; descriptions represent example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is (a short illustrative sketch follows this list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
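To make these three properties concrete, below is a deliberately simplified sketch in Python. It is not the NIST FFX/FF1 algorithm and offers no real security; it is a keyed, Feistel-style digit permutation whose only purpose is to show what an equivalent, referential, and reversible transform looks like at the interface level. The key, round count, and function names are illustrative assumptions.

```python
# Toy format-preserving transform over digit strings (illustration only).
# NOT NIST FFX/FF1 and NOT secure; it only demonstrates the equivalent,
# referential, and reversible properties described above.
import hashlib
import hmac

KEY = b"demo-key"      # illustrative; real systems use managed, rotated keys
ROUNDS = 8

def _round_value(half: str, rnd: int, width: int) -> int:
    """Keyed pseudo-random value derived from one half and the round number."""
    digest = hmac.new(KEY, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def _feistel(digits: str, sign: int, rounds) -> str:
    """Apply Feistel rounds to a digit string of at least two digits."""
    mid = len(digits) // 2
    a, b = digits[:mid], digits[mid:]
    for rnd in rounds:
        if rnd % 2 == 0:   # even rounds modify the left half using the right
            a = str((int(a) + sign * _round_value(b, rnd, len(a))) % 10 ** len(a)).zfill(len(a))
        else:              # odd rounds modify the right half using the left
            b = str((int(b) + sign * _round_value(a, rnd, len(b))) % 10 ** len(b)).zfill(len(b))
    return a + b

def protect(pan: str) -> str:
    return _feistel(pan, +1, range(ROUNDS))

def unprotect(token: str) -> str:
    return _feistel(token, -1, reversed(range(ROUNDS)))

if __name__ == "__main__":
    pan = "4111111111111111"
    token = protect(pan)
    assert token.isdigit() and len(token) == len(pan)   # equivalent
    assert protect(pan) == token                        # referential
    assert unprotect(token) == pan                      # reversible
    print(pan, "->", token)
```

A production deployment would replace protect/unprotect with a vetted FF1 implementation and keys served from a proper key manager; the calling application, however, sees exactly this shape of interface.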

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability, 99.999 percent, is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hpe.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open, every time.

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?


Security and risk are top priorities at every organization, yet traditional disaster recovery procedures focus on recovery from an administrative perspective: what to do to ensure critical business systems and applications are kept online. This includes infrastructure, staff, connectivity, logistics, and data restoration. Oftentimes security is overlooked, and infrastructure designated as disaster recovery is looked at and treated as secondary infrastructure; as such, the need to properly secure (and budget for) it is also treated as secondary to the production systems. Companies invest heavily in resources, security hardware, software, tools, and other solutions to protect their production systems. Typically, only a subset of those security solutions is deployed, if at all, to their disaster recovery systems.

The type of DR security that's right for an organization is based on need and risk. Identifying and understanding what the real risks are can help focus efforts and close gaps. A lot of people simply look at the perimeter and the highly visible systems. Meanwhile, they've got other systems and back doors where they're exposed, potentially leaking data and wide open for attack. In a recent article, Barry Forbes, XYPRO's VP of Sales and Marketing, discusses how senior executives at a top five U.S. bank indicated that they would prefer experiencing downtime to dealing with a breach. The last thing you want to deal with during disaster recovery is being hit with the double whammy of a security breach. Not having equivalent security solutions and active monitoring for disaster recovery systems puts your entire continuity plan and disaster recovery in jeopardy. This opens up a large exploitable gap for a savvy attacker or malicious insider. Attackers know all the security eyes are focused on production systems and data, yet the DR systems, whose purpose is to become production systems in case of disaster, are taking a back seat and are ripe for the picking.

Not surprisingly, the industry is seeing an increasing number of breaches on backup and disaster recovery systems. Compromising an unpatched or improperly secured system is much easier through a DR site. Attackers know that part of any good business continuity plan is to execute the plan on a consistent basis. This typically includes restoring live data onto backup or DR systems and ensuring applications continue to run and the business continues to operate. But if the disaster recovery system was not monitored or secured similarly to the live system, using similar controls and security solutions, the integrity of the system the data was just restored to is in question. That data may very well have been restored to a compromised system that was lying in wait. No one wants to issue outage notifications coupled with a breach notification.

The security considerations don't end there. Once the DR test has checked out and the compliance box has been ticked for a working DR system and a successfully executed plan, attackers and malicious insiders know that the data restored to a DR system can be much easier to gain access to, and activity on it is more difficult to detect. Therefore, identical security controls and the inclusion of DR systems in active monitoring are not just a nice-to-have but an absolute necessity.

COMPLIANCE & DISASTER RECOVERY
Organizations working in highly regulated industries need to be aware that security mandates aren't waived in times of disaster. Compliance requirements are still very much applicable during an earthquake, hurricane, or data loss.

In fact, the HIPAA Security Rule specifically calls out the need for maintaining security in an outage situation. Section 164.308(a)(7)(ii)(C) requires the implementation, as needed, of procedures to enable continuation of processes for "protection of the security of electronic protected health information while operating in emergency mode." The SOX Act is just as stringent, laying out a set of fines and other punishments for failure to comply with requirements, even at times of disaster. Section 404 of SOX discusses establishing and maintaining adequate internal control structures. Disaster recovery situations are not excluded.

It's also difficult to imagine the PCI Data Security Standards Committee relaxing its requirements on cardholder data protection for the duration a card processing application is running on a disaster recovery system. It's just not going to happen.

CONCLUSION
Neglecting to implement proper and thorough security into disaster recovery planning can make an already critical situation spiral out of control. Careful consideration of disaster recovery planning in the areas of host configuration, defense, authentication, and proactive monitoring will ensure the integrity of your DR systems and effectively prepare for recovery operations while keeping security at the forefront, keeping your business running. Most importantly, ensure your disaster recovery systems are secured at the same level and have the same solutions and controls as your production systems.


Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data, and as such is often mandated in regulatory compliance rules as a reliable control over sensitive data, using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding aspects of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery, reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, archive or vault keys for long-term recovery, and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, with their impact on supporting enterprise policy.

Figure 1. Local key management over a local network, where keys are stored with the encrypted storage.

Nathan Turajski


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing keys under the doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, as each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed becoming more complex, making audit and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-locating keys with the encrypted data provides easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, prone to error, neglect, and misuse
• Places "all eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability in co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data

While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
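As a rough illustration of that boot-time flow, here is a small, self-contained Python sketch of a remote key service that refuses to release (and can destroy) a volume key once a device is reported stolen. The class and method names are hypothetical; a real deployment would sit behind an authenticated KMIP or similar network interface rather than in-process calls.

```python
# Hypothetical sketch of remote key release with revocation (not a real product API).
import secrets
from typing import Optional

class RemoteKeyService:
    """Holds volume keys centrally; devices must ask for them at boot."""
    def __init__(self):
        self._keys = {}        # device_id -> key bytes
        self._revoked = set()  # device_ids reported lost or stolen
        self.audit_log = []    # simple audit trail for compliance evidence

    def enroll(self, device_id: str) -> None:
        self._keys[device_id] = secrets.token_bytes(32)

    def request_key(self, device_id: str) -> Optional[bytes]:
        """Called by a device at boot; denied (and logged) if revoked."""
        if device_id in self._revoked or device_id not in self._keys:
            self.audit_log.append(("DENIED", device_id))
            return None
        self.audit_log.append(("RELEASED", device_id))
        return self._keys[device_id]

    def revoke(self, device_id: str) -> None:
        """Mark a stolen device and destroy its key, leaving audit evidence."""
        self._revoked.add(device_id)
        self._keys.pop(device_id, None)
        self.audit_log.append(("DESTROYED", device_id))

if __name__ == "__main__":
    svc = RemoteKeyService()
    svc.enroll("array-07")
    assert svc.request_key("array-07") is not None   # normal boot
    svc.revoke("array-07")                           # device reported stolen
    assert svc.request_key("array-07") is None       # data stays unreadable
    print(svc.audit_log)
```

The audit entries are the point: they are the kind of evidence the article describes for demonstrating that access to the data was revoked.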

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment.

Figure 2. Remote key management separates encryption key management from the encrypted data.


Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at a lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to ensure which authorized devices can access keys and who can administer key lifecycle policies, comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with the encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation

A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users, using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration: economies of scale achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures universally enforced, removing the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize these risks

Figure 4. Central key management over wide area networks enables a single set of reliable controls and auditing over keys.


Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypting applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, supporting failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, help coordinate with enterprise policy and procedures.
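To make the policy-model consideration concrete, the short sketch below (an illustration added here, not taken from the article or any vendor product) shows one way a key manager might encode NIST SP 800-57 lifecycle states and refuse operations that a key's current state does not permit, while recording each decision for audit. In a real system, the same check would also consult user roles and application partitions.

```python
from enum import Enum

class KeyState(Enum):
    # Lifecycle states loosely following NIST SP 800-57 Part 1
    PRE_ACTIVATION = "pre-activation"
    ACTIVE = "active"
    DEACTIVATED = "deactivated"
    COMPROMISED = "compromised"
    DESTROYED = "destroyed"

# Operations a key may serve in each state (simplified, hypothetical policy table)
ALLOWED_OPERATIONS = {
    KeyState.PRE_ACTIVATION: set(),                 # not yet usable
    KeyState.ACTIVE: {"encrypt", "decrypt"},        # full use
    KeyState.DEACTIVATED: {"decrypt"},              # recovery of legacy data only
    KeyState.COMPROMISED: set(),                    # quarantined pending review
    KeyState.DESTROYED: set(),                      # key material is gone
}

def authorize(state: KeyState, operation: str, audit_log: list) -> bool:
    """Check an operation against the key's lifecycle state and record the decision."""
    permitted = operation in ALLOWED_OPERATIONS[state]
    audit_log.append((state.value, operation, "allow" if permitted else "deny"))
    return permitted

if __name__ == "__main__":
    log = []
    print(authorize(KeyState.ACTIVE, "encrypt", log))       # True
    print(authorize(KeyState.DEACTIVATED, "encrypt", log))  # False: only decrypt allowed
    print(log)                                               # tamper-proof storage is assumed elsewhere
```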

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence without compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies: HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and the high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployments of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.
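As a rough illustration of what KMIP interoperability buys an application team, the hedged sketch below uses the open-source PyKMIP client to create and fetch an AES key from a KMIP-compliant server. The hostname, port, and certificate paths are placeholders, and this is generic KMIP usage rather than an ESKM-specific API.

```python
# Minimal sketch, assuming the open-source PyKMIP package (pip install pykmip)
# and a reachable KMIP-compliant key manager; endpoint and cert paths are placeholders.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

client = ProxyKmipClient(
    hostname="keymanager.example.com",   # placeholder address of the KMIP server
    port=5696,                            # default KMIP TLS port
    cert="/etc/pki/app-client.pem",      # client certificate for mutual TLS (placeholder)
    key="/etc/pki/app-client.key",
    ca="/etc/pki/kmip-ca.pem",
)

with client:
    # Ask the server to generate a 256-bit AES key; only the key's identifier comes back.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256, name="app-volume-key")
    # Later, an authorized application retrieves the managed key object by identifier.
    key = client.get(key_id)
    print("created and retrieved key", key_id)
```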

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. You can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing, infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet, and PageWide, to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar spanning the entire width of a page, printing pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet, and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small workgroups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc's Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications, and hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.



Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE in Telco, Unified Communications, Alliances, and software development.



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type, from any industry, and manage the whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices are deployed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, plus data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment of HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators, CSPs and enterprises alike, can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization is the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM): The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and the associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

Figure: HPE Universal IoT Platform - manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module, you can manage IoT applications (configuration, tariff plans, subscriptions, device associations, and more) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.
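As a rough illustration of the hierarchical-account and RBAC idea described above (the roles, permissions, and account names below are invented for the example and are not the HPE product's actual model), a B2B2C operator might express who may do what as a simple role-to-permission mapping:

```python
# Minimal RBAC sketch for a multi-tenant IoT platform; role names and
# permissions are hypothetical, not the HPE Universal IoT Platform's schema.
ROLE_PERMISSIONS = {
    "platform_operator": {"onboard_application", "manage_tariff", "provision_device", "view_all_tenants"},
    "enterprise_admin":  {"provision_device", "associate_device", "view_own_tenant"},
    "end_customer":      {"view_own_devices"},
}

# Hierarchical accounts: each account records its parent, so a B2B2C chain
# (operator -> enterprise -> consumer) can be walked for scoping decisions.
ACCOUNTS = {
    "acme-energy": {"parent": "operator", "role": "enterprise_admin"},
    "alice":       {"parent": "acme-energy", "role": "end_customer"},
}

def is_allowed(account: str, action: str) -> bool:
    """Return True if the account's role grants the requested action."""
    role = ACCOUNTS.get(account, {}).get("role")
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("acme-energy", "provision_device"))  # True
print(is_allowed("alice", "provision_device"))        # False: consumers only view their own devices
```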

Network Interworking Proxy (NIP): The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies needed to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
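To give a feel for what one such protocol connector does, here is a minimal, hypothetical sketch using the open-source paho-mqtt client: it subscribes to device telemetry over MQTT and hands each reading to a normalization step. The broker address, topic layout, and payload fields are assumptions for illustration, not details of the HPE NIP.

```python
# Hypothetical MQTT ingestion connector; assumes the open-source paho-mqtt
# package (pip install paho-mqtt, 1.x-style API) and an illustrative broker/topic layout.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"      # placeholder MQTT broker
TOPIC = "devices/+/telemetry"      # '+' wildcard matches any device ID

def normalize(device_id: str, payload: dict) -> dict:
    """Map a device-specific payload onto a uniform internal representation."""
    return {
        "device": device_id,
        "metric": payload.get("type", "unknown"),
        "value": payload.get("val"),
        "unit": payload.get("unit", ""),
    }

def on_message(client, userdata, msg):
    # Topic looks like devices/<device_id>/telemetry in this sketch
    device_id = msg.topic.split("/")[1]
    reading = normalize(device_id, json.loads(msg.payload))
    print("queued for data acquisition layer:", reading)

client = mqtt.Client()             # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER, 1883)       # 1883 is the standard unencrypted MQTT port
client.subscribe(TOPIC)
client.loop_forever()              # block and process incoming telemetry
```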

Data Acquisition and Verification (DAV): DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for the transformation, validation, and processing of the IoT data (a small sketch of one such rule follows the list below):

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
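The following minimal sketch (an illustration under assumed field names, not the platform's actual rule syntax) shows the kind of rule the validation step might apply: a missing reading is filled in by linear extrapolation from the two previous values.

```python
# Illustrative validation/extrapolation rule; the field names and the rule itself
# are assumptions for the example, not HPE code.
from typing import Optional

def extrapolate_missing(history: list, latest: Optional[float]) -> float:
    """Return the latest reading, or a linear extrapolation if it is missing."""
    if latest is not None:
        return latest
    if len(history) >= 2:
        # Continue the trend of the last two good readings
        return history[-1] + (history[-1] - history[-2])
    # Not enough history: fall back to the last known value (or 0.0 if none)
    return history[-1] if history else 0.0

readings = [10.0, 12.0]                       # previous good meter readings
print(extrapolate_missing(readings, None))    # 14.0, extrapolated
print(extrapolate_missing(readings, 13.5))    # 13.5, passed through unchanged
```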

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure, encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring strong performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity for the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics: The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.

Operations and Business Support Systems (OSS/BSS): The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC): The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
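For example, a first cut at that first-time versus repeat segmentation can be done in a few lines of pandas; the CSV file and column names below are hypothetical, so adapt them to your own order export:

```python
# Illustrative customer segmentation with pandas; file and column names are assumptions.
import pandas as pd

orders = pd.read_csv("orders.csv")  # expects customer_id, order_id, order_total columns

per_customer = orders.groupby("customer_id").agg(
    order_count=("order_id", "nunique"),
    total_spend=("order_total", "sum"),
)
per_customer["segment"] = per_customer["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare average spend by segment to see where the value sits
print(per_customer.groupby("segment")["total_spend"].mean())
```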

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and the costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings that may be useful for NonStop users.

Here are a few key findings of the Ponemon study that I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and suffering high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances
Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had experienced malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news: the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI, and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor
by Gabrielle Guerrera

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf. Back in the early Tandem days, there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was, and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you get into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself, or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's

really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with

hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues - not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or

who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first. Note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools - AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools - CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers around the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
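To make that substitution step concrete, the fragment below is a rough Python sketch of the kind of transformation such a generator performs. It is not the actual DCL config_h.com procedure; the feature table, file names, and trailing include shown are illustrative assumptions only.

    # Illustrative sketch only: turn a config.h.in template into a config.h,
    # answering each "#undef FEATURE" line from a table of known results and
    # deferring anything unresolved to a package-specific config_vms.h.
    KNOWN_FEATURES = {
        "HAVE_UNISTD_H": "1",     # assumed answers for the target system
        "HAVE_STRDUP": "1",
    }

    def generate_config_h(template_text: str) -> str:
        lines = []
        for line in template_text.splitlines():
            stripped = line.strip()
            if stripped.startswith("#undef "):
                name = stripped.split()[1]
                if name in KNOWN_FEATURES:
                    lines.append(f"#define {name} {KNOWN_FEATURES[name]}")
                else:
                    # Leave undecided macros for the hand-written override file.
                    lines.append(f"/* #undef {name} */")
            else:
                lines.append(line)
        # Package-specific corrections are appended last, as described above.
        lines.append('#include "config_vms.h"')
        return "\n".join(lines)

    if __name__ == "__main__":
        with open("config.h.in") as src, open("config.h", "w") as dst:
            dst.write(generate_config_h(src.read()))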

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environments used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel

Physicists and multi-national Oceanography Cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering

the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities

to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications

domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at

jasonkazarian@hpe.com

46

Analysis of the data breach types shows that 31 percent are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24 percent of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


2 We use the term "protection" to mean a generic algorithm that transforms data from its original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Figure: the data protection stack - layers (Application, Database, Object, Storage) with example traffic such as formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20 percent of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
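To illustrate the three properties above, here is a small, self-contained Python sketch of a format-preserving transform built from a toy Feistel network over digit strings. It is not the NIST FFX algorithm and is not suitable for production use; it only demonstrates that a keyed, deterministic, reversible mapping can keep the length and character set of a card number intact.

    import hashlib
    import hmac

    def _round_value(key: bytes, half: str, round_no: int, width: int) -> int:
        # Toy pseudo-random function for one Feistel round (not NIST FFX).
        digest = hmac.new(key, f"{round_no}:{half}".encode(), hashlib.sha256).digest()
        return int.from_bytes(digest[:8], "big") % (10 ** width)

    def fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
        # Feistel network over a digit string (at least 2 digits): digits in,
        # digits out, same length, deterministic for a given key.
        n = len(digits)
        left, right = digits[: n // 2], digits[n // 2 :]
        for r in range(rounds):
            f = _round_value(key, right, r, len(left))
            new_right = str((int(left) + f) % (10 ** len(left))).zfill(len(left))
            left, right = right, new_right
        return left + right

    def fpe_decrypt(key: bytes, digits: str, rounds: int = 8) -> str:
        # Runs the rounds in reverse to recover the original value.
        n = len(digits)
        left, right = digits[: n // 2], digits[n // 2 :]
        for r in reversed(range(rounds)):
            f = _round_value(key, left, r, len(right))
            old_left = str((int(right) - f) % (10 ** len(right))).zfill(len(right))
            left, right = old_left, left
        return left + right

    key = b"example-tenant-key"            # illustrative key material
    pan = "4111111111111111"               # a well-known test card number
    token = fpe_encrypt(key, pan)          # equivalent: 16 digits in, 16 digits out
    assert fpe_decrypt(key, token) == pan  # reversible
    assert fpe_encrypt(key, pan) == token  # referential: stable mapping

In a real deployment, only the node performing ingress and egress would hold the key; the legacy database would store and join on the protected values exactly as it does today.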

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky - especially around peak utilization times such as open enrollment and the financial close - as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS,

whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability - 99.999 percent - is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18 percent software licensing cost reduction and up to 55 percent reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization

and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being on the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open - every time.

You wouldn't jump out of an airplane unless you knew your parachute worked - would you?


17

Overview
When deploying encryption applications, the long-term maintenance and protection of the encryption keys need to be a critical consideration. Cryptography is a well-proven method for protecting data and, as such, is often mandated in regulatory compliance rules as reliable control over sensitive data using well-established algorithms and methods.

However, too often not as much attention is placed on the social engineering and safeguarding of maintaining reliable access to keys. If you lose access to keys, you by extension lose access to the data that can no longer be decrypted. With this in mind, it's important to consider various approaches when deploying encryption with secure key management that ensure an appropriate level of assurance for long-term key access and recovery that is reliable and effective throughout the information lifecycle of use.

Key management deployment architectures
Whether through manual procedures or automated, a complete encryption and secure key management system includes the encryption endpoints (devices, applications, etc.), key generation and archiving system, key backup, policy-based controls, logging and audit facilities, and best-practice procedures for reliable operations. Based on this scope required for maintaining reliable ongoing operations, key management deployments need to match the organizational structure, security assurance levels for risk tolerance, and operational ease that impacts ongoing time and cost.

Local key management
Key management that is distributed in an organization, where keys coexist within an individual encryption application or device, is a local-level solution. When highly dispersed organizations are responsible for only a few keys and applications, and no system-wide policy needs to be enforced, this can be a simple approach. Typically, local users are responsible for their own ad hoc key management procedures, where other administrators or auditors across an organization do not need access to controls or activity logging.

Managing a key lifecycle locally will typically include manual operations to generate keys, distribute or import them to applications, and archive or vault keys for long-term recovery - and, as necessary, delete those keys. All of these operations tend to take place at a specific data center where no outside support is required or expected. This creates higher risk if local teams do not maintain ongoing expertise or systematic procedures for managing controls over time. When local keys are managed ad hoc, reliable key protection and recovery become a greater risk.

Although local key management can have advantages in its perceived simplicity, without the need for central operational overhead, it is weak on dependability. In the event that access to a local key is lost or mishandled, no central backup or audit trail can assist in the recovery process.

Fundamentally risky if no redundancy or automation exists

Local key management has the potential to improve security if there is no need for control and audit of keys as part of broader enterprise security policy management. That is, it avoids wide access exposure that, through negligence or malicious intent, could compromise keys or logs that are administered locally. Essentially, maintaining a local key management practice can minimize external risks that could undermine local encryption and key management lifecycle operations.

Local, remote, and centrally unified key management

HPE Enterprise Secure Key Manager solutions

Key management for encryption applications creates manageability risks when security controls and operational concerns are not fully realized. Various approaches to managing keys are discussed, with their impact on supporting enterprise policy.

Figure 1: Local key management over a local network, where keys are stored with the encrypted storage

Nathan Turajski

18

However, deploying the entire key management system in one location, without the benefit of geographically dispersed backup or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications - the analogy being locking your front door but placing keys under a doormat, or leaving keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, as each local key management solution requires its own resources and procedures to maintain reliably within unique silos. As local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications begin to scale over time, organizations will find the cost and management simplicity originally assumed now becoming more complex, making audits and consistent controls unreliable. Organizations with limited IT resources that are oversubscribed will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Co-located keys with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles - prone to error, neglect, and misuse
• Places "all eggs in one basket" for key archives and data without benefit of remote backups or audit logs
• May lack local security skills, creating higher risk as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation driving up audit costs and remediation expenses long-term
• Data mobility hurdles - media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled without management being co-located with the application, such as through a console UI over secure IP networks. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to deploy across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
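As a rough illustration of that workflow, the sketch below uses the open-source PyKMIP client to revoke and then destroy the key for a stolen device on a remote KMIP-speaking key manager. The choice of PyKMIP, the key UID, and the client configuration are assumptions for illustration only, not part of any specific HPE ESKM procedure.

    # Illustrative only: an administrator revokes and destroys the key for a
    # stolen storage device on a remote KMIP key manager, so the device can
    # never unlock its media again. Key UID and client setup are assumed.
    from kmip.pie.client import ProxyKmipClient
    from kmip.core import enums

    STOLEN_DEVICE_KEY_UID = "42"  # hypothetical UID recorded when the device enrolled

    with ProxyKmipClient() as client:  # connection details come from the PyKMIP config file
        # Mark the key as compromised first, so the state change lands in the audit trail.
        client.revoke(enums.RevocationReasonCode.KEY_COMPROMISE, STOLEN_DEVICE_KEY_UID)
        # Then destroy the key material; subsequent boot-time key requests will fail.
        client.destroy(STOLEN_DEVICE_KEY_UID)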

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications

Figure 2: Remote key management separates encryption key management from the encrypted data

19

without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which makes the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple - similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility - if encryption devices move, key management systems can remain in their same place operationally

Cons:
• Manual procedures don't improve security if still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified - or commonly, an enterprise secure key management - system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to ensure which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Realizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and alerts and reporting are more efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility - the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO)/return on investment (ROI) over time with consistent policy and removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system - best practices can minimize risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys.


Best practices: adopting a flexible, strategic approach

In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system

Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to cost, administrative flexibility, and the security assurance levels provided.

Hardware or software assurance. Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard used to measure security assurance levels. A hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary. The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a brief illustrative sketch follows this list).

Policy model. Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

Partitioning and user separation. To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability. For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability. As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging. Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration. As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
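To make the KMIP point concrete, here is a minimal sketch of how an encryption application might request and retrieve a key from a KMIP-compliant key manager, using the open-source PyKMIP client library. The hostname, port, and certificate paths are placeholders, not details of any specific product.

```python
# Minimal sketch: create and retrieve an AES key over KMIP.
# The server address and certificate paths below are hypothetical placeholders.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

client = ProxyKmipClient(
    hostname="keymanager.example.com",  # placeholder KMIP server address
    port=5696,                          # standard KMIP-over-TLS port
    cert="client-cert.pem",             # client certificate for mutual TLS
    key="client-key.pem",
    ca="ca-cert.pem",
)

with client:
    # Ask the key manager to generate a 256-bit AES key; the key material
    # stays on the key manager and a unique identifier is returned.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # An authorized application can later fetch the key object by its ID,
    # subject to the access policies enforced centrally.
    key = client.get(key_id)
    print("Retrieved key object for ID:", key_id)
```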

Conclusions

As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies

HPE Enterprise Secure Key Manager

Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise

ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security

HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data-at-rest, in-use, and in-motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE

Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP

Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies (laser, inkjet, and PageWide) to address their printing needs.

How HP PageWide Technology is different

To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office

HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing (laser, inkjet, and now PageWide) can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization

Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP

To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution

Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions

Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example, low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1 Gartner, "Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015"
2 "The Internet of Things: Making Sense of the Next Mega-Trend," Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers

• Systems integrators for M2M/IoT services and industry-specific applications

• Managed ICT infrastructure providers

• Management platform providers for device management, service management, and charging

• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics

• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview

For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage the whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, and to integrate devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture

The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.
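To give a feel for what oneM2M alignment means in practice, here is a minimal, hedged sketch of an application posting a sensor reading as a oneM2M contentInstance over the standard's HTTP REST binding. The server URL, originator ID, and container path are hypothetical, and exact headers and payload details vary by oneM2M release and platform configuration.

```python
# Illustrative sketch only: create a oneM2M contentInstance via HTTP REST.
# The CSE base URL, originator, and container path below are placeholders.
import json
import requests

CSE_CONTAINER_URL = "https://iot-platform.example.com/onem2m/in-cse/meters/meter-0001/data"

headers = {
    "X-M2M-Origin": "CmyMeterApp",            # originator (application entity) ID
    "X-M2M-RI": "req-0001",                   # request identifier
    "Content-Type": "application/json;ty=4",  # ty=4 denotes a contentInstance resource
}

# The reading itself is carried in the "con" (content) attribute.
payload = {"m2m:cin": {"con": json.dumps({"kwh": 3.42, "ts": "2016-04-01T12:00:00Z"})}}

response = requests.post(CSE_CONTAINER_URL, headers=headers, json=payload)
print(response.status_code, response.text)
```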

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need to be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)

The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

Figure: HPE Universal IoT Platform highlights: manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. (© Copyright Hewlett Packard Enterprise 2016)

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C, and B2B2C models.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP)

The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices, and for communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies used to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
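As a concrete illustration of one such protocol path, the sketch below shows how a device might publish a telemetry reading to an MQTT connector using the open-source paho-mqtt client. The broker address, topic, and credentials are hypothetical placeholders, not details of the platform itself.

```python
# Illustrative sketch: a device publishing one telemetry reading over MQTT.
# Broker address, topic naming, and credentials are placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="meter-0001")
client.username_pw_set("device-user", "device-secret")   # placeholder credentials
client.connect("iot-platform.example.com", 1883)         # placeholder broker endpoint

# Publish a small JSON reading; the platform side would validate and store it.
reading = {"deviceId": "meter-0001", "kwh": 3.42, "ts": "2016-04-01T12:00:00Z"}
client.publish("meters/meter-0001/telemetry", json.dumps(reading), qos=1)

client.disconnect()
```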

Data Acquisition and Verification (DAV)

DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using the oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure/encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics

The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
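To make the descriptive-analytics idea tangible, here is a minimal sketch of a KPI query run against a Vertica database using the open-source vertica_python driver. The connection details and the table and column names are hypothetical, invented purely for illustration.

```python
# Illustrative sketch: a simple descriptive KPI over collected meter data.
# Connection parameters and the meter_readings table/columns are placeholders.
import vertica_python

conn_info = {
    "host": "analytics.example.com",
    "port": 5433,
    "user": "iot_analyst",
    "password": "change-me",
    "database": "iot",
}

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    # Average consumption per device over the last 30 days (a descriptive KPI).
    cursor.execute("""
        SELECT device_id, AVG(kwh) AS avg_kwh
        FROM meter_readings
        WHERE reading_ts > CURRENT_TIMESTAMP - INTERVAL '30 days'
        GROUP BY device_id
        ORDER BY avg_kwh DESC
        LIMIT 10
    """)
    for device_id, avg_kwh in cursor.fetchall():
        print(device_id, avg_kwh)
```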

Operations and Business Support Systems (OSS/BSS)

The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)

The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success

The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products. By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations. Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provide invaluable insights capable of improving every area of the business.

3. Competitive advantage. Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover. The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER

One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff, and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security-conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


Jeff: It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group, for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community; everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle It really is a unique community I think we are all lucky to be a part of it

Jeff Agreed

Tom Irsquove worked for eight different computer companies in different roles and titles and out of all of them the best group of people with the best product has always been NonStop For me there are four reasons why selling NonStop is so much fun

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base; if this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with the person on the other side of it willing to invest the time, energy, and care to really be effective as a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have mentored people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP at Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later, he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day, he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are always the first priority. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought, too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get across those messages I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times where you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face

An integrated SQL database manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


The Open Source on OpenVMS Community has been working over the last several months to improve the quality, as well as the quantity, of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been an effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release had many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require GNV to be installed in order to use them.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using the HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers for the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The port of CPython 3.6a0+ by John Malmberg is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional CPython 3.6a0+. Currently it is what John uses to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of CPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and cURL ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file, covering the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port open source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environments used by these and other packages.

There are also tools outside the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.
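For the Python subprocess module specifically, usage looks the same as on any other platform; the minimal sketch below runs an external command and captures its output. It assumes the OpenVMS port exposes the standard CPython interface, and the command shown is only a placeholder for something available in your environment.

    # Minimal subprocess illustration (standard CPython interface assumed for the port).
    import subprocess

    # Run a command and capture its output; swap "uname -a" for any command
    # available on your system (placeholder only).
    result = subprocess.run(["uname", "-a"],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    print(result.returncode)
    print(result.stdout)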

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 that is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the ClamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in open source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical open source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of the anti-virus software firm Kaspersky Lab discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem, Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing the originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order from most general to most specific: storage, object, database, and application.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from its original (plain-text) form to an encoded (cipher-text) form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

The data protection stack (top to bottom):
• Application - formatted data items
• Database
• Object - files, directories
• Storage - disk blocks
Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data), in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically challenging, or both. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, the protected data remains anonymized and safe.
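As a toy illustration of those properties, the sketch below applies a keyed, deterministic, reversible transform to card-number digit strings. It is emphatically not the NIST FFX/FF1 construction; a production system would use a validated FPE library with proper key management. It only demonstrates that a format-preserving transform keeps length and character set intact (equivalent), maps the same input to the same output every time (referential), and can be undone by the key holder (reversible).

    # Toy format-preserving transform over digit strings (illustration only; NOT NIST FFX/FF1).
    import hashlib
    import hmac

    KEY = b"demo-secret-key"  # placeholder key material

    def _prf(data: str, width: int) -> int:
        """Keyed pseudo-random value reduced to `width` decimal digits."""
        mac = hmac.new(KEY, data.encode(), hashlib.sha256).hexdigest()
        return int(mac, 16) % (10 ** width)

    def _mix(x: str, other: str, rnd: int, sign: int) -> str:
        """Add or subtract a keyed round value to one half, preserving its width."""
        v = (int(x) + sign * _prf(f"{rnd}:{other}", len(x))) % (10 ** len(x))
        return str(v).zfill(len(x))

    def protect(digits: str, rounds: int = 10) -> str:
        """Encode a digit string as another digit string of the same length."""
        mid = len(digits) // 2
        a, b = digits[:mid], digits[mid:]
        for r in range(rounds):
            if r % 2 == 0:
                a = _mix(a, b, r, +1)
            else:
                b = _mix(b, a, r, +1)
        return a + b

    def reveal(digits: str, rounds: int = 10) -> str:
        """Invert protect(), recovering the original digit string."""
        mid = len(digits) // 2
        a, b = digits[:mid], digits[mid:]
        for r in reversed(range(rounds)):
            if r % 2 == 0:
                a = _mix(a, b, r, -1)
            else:
                b = _mix(b, a, r, -1)
        return a + b

    pan = "4111111111111111"                                 # 16-digit test value
    token = protect(pan)
    assert len(token) == len(pan) and token.isdigit()        # equivalent: same length, same character set
    assert protect(pan) == token                             # referential: deterministic, joins still line up
    assert reveal(token) == pan                              # reversible: the key holder recovers the original

Because the mapping is deterministic under a given key, protecting the card-number column in two linked tables yields the same token in both, so primary and foreign key joins continue to work on the protected data.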

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.
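To make the sharing benefit concrete, the self-contained sketch below joins two anonymized record sets on a deterministic keyed pseudonym of the card number, so a third-party analyst can correlate records without ever seeing a real number. For brevity the pseudonym is a plain HMAC; in the scenario described here it would be the FPE output, which the data owner alone can also reverse.

    # Third-party analytics on anonymized, referentially consistent data (illustration only).
    import hashlib
    import hmac

    KEY = b"demo-secret-key"  # placeholder key held only by the data owner

    def pseudonym(card_number: str) -> str:
        # Deterministic keyed pseudonym; an FPE token would serve the same joining purpose.
        return hmac.new(KEY, card_number.encode(), hashlib.sha256).hexdigest()[:16]

    # The data owner anonymizes both data sets before sharing them.
    payments = [{"card": pseudonym("4111111111111111"), "amount": 42.50},
                {"card": pseudonym("5500000000000004"), "amount": 19.99}]
    loyalty = {pseudonym("4111111111111111"): "gold",
               pseudonym("5500000000000004"): "silver"}

    # The analytics firm computes spend per loyalty tier without any real card numbers.
    spend_by_tier = {}
    for p in payments:
        tier = loyalty.get(p["card"], "unknown")
        spend_by_tier[tier] = spend_by_tier.get(tier, 0.0) + p["amount"]
    print(spend_by_tier)   # {'gold': 42.5, 'silver': 19.99}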

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm with equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (covering "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection, with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended, as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronics Engineers. 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% reduction in software licensing costs and up to a 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story
RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2, along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you.


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) program is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in the program's fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open, every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.


However, deploying the entire key management system in one location, without the benefit of geographically dispersed backups or centralized controls, can add higher risk to operational continuity. For example, placing the encrypted data, the key archive, and a key backup in the same proximity is risky in the event a site is attacked or disaster hits. Moreover, encrypted data is easier to attack when keys are co-located with the targeted applications; the analogy is locking your front door but placing the keys under the doormat, or leaving the keys in the car ignition instead of your pocket.

While local key management could potentially be easier to implement than centralized approaches, economies of scale will be limited as applications expand, because each local key management solution requires its own resources and procedures to maintain reliably within its own silo. Since local approaches tend to require manual administration, the keys are at higher risk of abuse or loss as organizations evolve over time, especially when administrators change roles, compared with maintenance by a centralized team of security experts. As local-level encryption and secure key management applications scale over time, organizations will find that the cost and management simplicity originally assumed become more complex, making audits and consistent controls unreliable. Organizations with limited, oversubscribed IT resources will need to solve new operational risks.

Pros:
• May improve security through obscurity and isolation from a broader organization that could add access control risks
• Can be cost effective if kept simple, with a limited number of applications that are easy to manage with only a few keys

Cons:
• Keys co-located with the encrypted data provide easier access if systems are stolen or compromised
• Often implemented via manual procedures over key lifecycles, which are prone to error, neglect, and misuse
• Places "all of the eggs in one basket" for key archives and data, without the benefit of remote backups or audit logs
• May lack local security skills; risk increases as IT teams are multitasked or leave the organization
• Less reliable audits, with unclear user privileges and a lack of central log consolidation, driving up audit costs and remediation expenses long-term
• Data mobility hurdles: media moved between locations requires key management to be moved also
• Does not benefit from a single central policy, enforced auditing efficiencies, or unified controls for achieving economies and scalability

Remote key management
Key management where application encryption takes place in one physical location while keys are managed and protected in another allows for remote operations, which can help lower risks. As illustrated in the local approach, there is vulnerability from co-locating keys with encrypted data if a site is compromised due to attack, misuse, or disaster.

Remote administration enables encryption keys to be controlled, such as through a console UI over secure IP networks, without management being co-located with the application. This is ideal for dark data centers or hosted services that are not easily accessible, and/or widely distributed locations where applications need to be deployed across a regionally dispersed environment.

Provides higher assurance security by separating keys from the encrypted data
While remote management doesn't necessarily introduce automation, it does address local attack threat vectors and key availability risks through remote key protection, backups, and logging flexibility. The ability to manage controls remotely can improve response time during manual key administration in the event encrypted devices are compromised in high-risk locations. For example, a stolen storage device that requests a key at boot-up could have the key remotely located and destroyed, along with audit log verification to demonstrate compliance with data privacy regulations for revoking access to data. Maintaining remote controls can also enable a quicker path to safe harbor, where a breach won't require reporting if proof of access control can be demonstrated.
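As a sketch of that revocation workflow, the fragment below locates and destroys the keys tied to a stolen device and then pulls the audit evidence needed for compliance reporting. The client class, its methods, and the device identifier are hypothetical placeholders for whatever SDK or KMIP interface a given key manager exposes; this illustrates the sequence of steps, not a vendor API.

    # Hypothetical remote key-revocation sketch (illustrative only).
    # KeyManagerClient, its methods, and the IDs below are assumed names,
    # not a specific vendor SDK.

    class KeyManagerClient:
        def __init__(self, url: str, credentials: str):
            self.url, self.credentials = url, credentials

        def find_keys(self, device_id: str) -> list:
            # Look up key IDs associated with a device (placeholder logic).
            return ["key-1234"]

        def destroy(self, key_id: str, reason: str) -> None:
            # A real client would call the remote key manager over TLS here.
            print(f"destroyed {key_id}: {reason}")

        def audit_events(self, key_id: str) -> list:
            # Return audit entries proving the key is no longer retrievable.
            return [{"key": key_id, "event": "DESTROY", "actor": "secops"}]

    client = KeyManagerClient("https://keymanager.example.com", credentials="***")
    for key_id in client.find_keys(device_id="storage-device-042"):
        client.destroy(key_id, reason="device reported stolen")
        # Keep the audit trail as evidence for safe-harbor and compliance reporting.
        print(client.audit_events(key_id))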

As a current high-profile example of remote and secure key management success, the concept of "bring your own encryption key" is being employed with cloud service providers, enabling tenants to take advantage of co-located encryption applications without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

Figure 2: Remote key management separates encryption key management from the encrypted data.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have simplicity similar to local methods, while also suffering from a similar dependence on manual procedures.

Pros:
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location (co-location makes the system more vulnerable to compromise)
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit, without having to physically attend to each distributed system or application, which can be time consuming and costly
• Improves data mobility: if encryption devices move, key management systems can remain in the same place operationally

Cons:
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to the ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to govern which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with the encrypted data, by relying on higher-assurance, automated management systems. As a best practice, hardware-based, tamper-evident key vaults and policy and logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups of keys, policies, and configuration data.

Higher assurance key protection combined with reliable security automation
A higher risk is assumed when relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Recognizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are efficiently distributed and escalated when necessary.

Pros:
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users improves on the benefit of local management
• Automation and consistency of key lifecycle procedures are universally enforced, removing the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility: the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons:
• Key management appliances carry higher upfront costs for a single application, but they enable future reusability that improves total cost of ownership (TCO) and return on investment (ROI) over time through consistent policy and the removal of redundancies
• If access controls are not managed properly, toxic combinations of over-privileged users can compromise the system; best practices can minimize these risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys.


Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or an organizational structure where resources, operations, and expertise are best kept within a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere in the world to enforce policies, with a single set of controls to manage and a single point for auditing the security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to the cost, administrative flexibility, and security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software protect keys to varying degrees of reliability. FIPS 140-2 is the standard for measuring security assurance levels; a hardened, hardware-based appliance solution will be validated to Level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

Policy model: Key lifecycle controls should follow NIST SP 800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, supporting failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network, and/or a server with automated backups, are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), and security information and event management (SIEM) for monitoring help coordinate with enterprise policy and procedures.
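To make the KMIP interoperability point concrete, here is a minimal sketch of how an application might request and retrieve a key from a KMIP-compliant key manager using the open-source PyKMIP client library. The hostname, port, and certificate paths are placeholders, and this illustrates the protocol pattern in general rather than any HPE-specific API.

    # Minimal KMIP client sketch using the open-source PyKMIP library.
    # Hostname and certificate paths below are placeholders, not real endpoints.
    from kmip.pie.client import ProxyKmipClient
    from kmip.core import enums

    client = ProxyKmipClient(
        hostname="keymanager.example.com",   # placeholder KMIP server address
        port=5696,                           # standard KMIP-over-TLS port
        cert="/etc/pki/app-client-cert.pem",
        key="/etc/pki/app-client-key.pem",
        ca="/etc/pki/key-manager-ca.pem",
    )

    with client:
        # Ask the key manager to generate a 256-bit AES key; only the key's
        # unique identifier is returned to the application here.
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

        # Later, an authorized application retrieves the key material by ID;
        # access policy and audit logging are enforced on the server side.
        key = client.get(key_id)
        print("Retrieved key", key_id)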

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management toward an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence without compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users under a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.


As more applications embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployments of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security
HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet, and PageWide, to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar spanning the entire width of a page and printing pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, laser, inkjet, and now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example, low throughput network connectivity.



Delivering on the IoT Customer Experience

1 Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2 The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type, from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and Low Throughput Networks such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators, CSPs and enterprises alike, can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo, and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform over traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and the associated gateways/devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

Figure: HPE Universal IoT Platform highlights - manage sensors across verticals, data monetization chain, standards alignment, connectivity-agnostic, new service offerings.


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module, you can manage IoT applications (configuration, tariff plans, subscriptions, device associations, and more) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs, volume, variety, and velocity, typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
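As an illustration of what a lightweight protocol connector does, the following sketch uses the open-source Eclipse Paho MQTT client to subscribe to device telemetry and normalize each message into a uniform record before handing it to a data layer. The broker address, topic layout, and payload fields are assumptions made for the example, not details of the HPE Protocol Factory.

    # Illustrative MQTT "protocol connector" sketch (Eclipse Paho, 1.x callback style).
    # Broker address, topic structure, and payload fields are assumed for the example.
    import json
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # Subscribe to telemetry from all devices, e.g. sensors/<device-id>/telemetry
        client.subscribe("sensors/+/telemetry")

    def on_message(client, userdata, msg):
        device_id = msg.topic.split("/")[1]
        payload = json.loads(msg.payload)
        # Normalize into a uniform, protocol-agnostic record for the data layer
        record = {
            "device": device_id,
            "metric": payload.get("metric", "unknown"),
            "value": payload.get("value"),
            "timestamp": payload.get("ts"),
        }
        print("normalized reading:", record)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.com", 1883)  # placeholder broker
    client.loop_forever()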

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, maintaining it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data (a simple illustration follows the list below):

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
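As a simple illustration of the second rule above, the sketch below fills an isolated missing reading by interpolating between its neighbors. The field layout and the single-gap averaging rule are assumptions for the example, not the platform's actual rule syntax.

    # Illustrative handling of a missing data element by extrapolation/interpolation.
    # The single-gap averaging rule here is an assumption for the example.
    def fill_missing(readings):
        """Fill isolated None values in a time-ordered list of meter readings."""
        filled = list(readings)
        for i in range(1, len(filled) - 1):
            if filled[i] is None and filled[i - 1] is not None and filled[i + 1] is not None:
                filled[i] = (filled[i - 1] + filled[i + 1]) / 2.0
        return filled

    print(fill_missing([10.2, None, 10.6, 10.9]))  # -> [10.2, 10.4, 10.6, 10.9]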

The DAV component is responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on complex-event processing), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
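To show what a simple real-time, complex-event-processing style rule can look like, here is a sketch that raises an alert only when several consecutive readings exceed a threshold. The window size and threshold are arbitrary illustrative values, not part of the HPE analytics module.

    # Illustrative sliding-window rule in the spirit of complex-event processing.
    # Threshold and window size are arbitrary example values.
    from collections import deque

    def make_threshold_rule(threshold, window=3):
        recent = deque(maxlen=window)

        def process(reading):
            recent.append(reading)
            if len(recent) == window and all(r > threshold for r in recent):
                return f"ALERT: {window} consecutive readings above {threshold}"
            return None

        return process

    rule = make_threshold_rule(threshold=80.0)
    for value in [75, 82, 85, 88, 70]:
        event = rule(value)
        if event:
            print(event)  # fires once three readings in a row exceed 80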

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services

• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application; sits on top of ubiquitous, securely managed connectivity; and enables the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data depends on the size of the company; the insights gleaned from it do not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementing business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data
To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
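As a hedged illustration of that starting point, the sketch below uses the open-source pandas library to separate first-time from repeat customers in a simple order export. The file name and column names are assumptions about your data, not a prescribed format.

    # Illustrative first-time vs. repeat customer segmentation with pandas.
    # "orders.csv" and its columns (customer_id, order_id, order_total) are assumed.
    import pandas as pd

    orders = pd.read_csv("orders.csv")

    per_customer = orders.groupby("customer_id").agg(
        order_count=("order_id", "nunique"),
        revenue=("order_total", "sum"),
    )
    per_customer["segment"] = per_customer["order_count"].map(
        lambda n: "repeat" if n > 1 else "first-time"
    )

    # Compare the value of each segment to see where marketing effort pays off.
    print(per_customer.groupby("segment")["revenue"].agg(["count", "mean", "sum"]))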

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations
In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements made possible by the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance that meets the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings, which may be useful for NonStop users.

Here are a few key findings of the Ponemon study that I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances, XYPRO Technology
Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had experienced malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge of deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
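As a hedged illustration of that second step, the sketch below flags a user whose daily activity count deviates sharply from their own established baseline. The data source, the baseline window, and the three-sigma rule are assumptions for the example, not a NonStop or XYGATE feature.

    # Illustrative baseline-deviation check for per-user activity counts.
    # The history values, and the 3-sigma rule, are example assumptions only.
    import statistics

    def deviates_from_baseline(history, todays_count, sigmas=3.0):
        """Return True if today's activity count is far outside the user's baseline."""
        mean = statistics.mean(history)
        spread = statistics.pstdev(history) or 1.0  # avoid division by zero on flat history
        return abs(todays_count - mean) > sigmas * spread

    baseline = [12, 9, 14, 11, 10, 13, 12, 11, 10, 12]   # prior daily privileged-command counts
    print(deviates_from_baseline(baseline, todays_count=55))  # True: worth investigating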

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementing those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf. Back in the early Tandem days, there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP)-focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group, for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective as a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have mentored people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress. Not just another pretty face.

An integrated SQL database manager for HP NonStop.

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New: Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or

who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first; upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets renamed on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are also tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems: even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has grown exponentially, on the order of O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


2 We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Figure: The data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic at each layer includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.
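To make the equivalent, referential, and reversible properties concrete, here is a minimal, self-contained Python sketch. It is a toy alternating-Feistel construction over digit strings, not the NIST FFX algorithm itself; the key, round count, and test number are purely illustrative and would never be used in a real deployment.

```python
import hashlib
import hmac

ROUNDS = 10  # illustrative round count, not a NIST FFX parameter


def _round_value(key: bytes, round_no: int, other_half: str, width: int) -> int:
    """Pseudo-random round function: HMAC-SHA256 over the round number and
    the half that is not being modified, reduced to a width-digit integer."""
    digest = hmac.new(key, f"{round_no}:{other_half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)


def toy_fpe(key: bytes, digits: str, decrypt: bool = False) -> str:
    """Map a digit string to another digit string of the same length.

    Even rounds adjust the left half using the right half; odd rounds adjust
    the right half using the left half. Decryption runs the rounds in reverse
    order and subtracts instead of adding, so the mapping is a bijection.
    """
    n = len(digits)
    split = n // 2
    left, right = digits[:split], digits[split:]
    order = range(ROUNDS - 1, -1, -1) if decrypt else range(ROUNDS)
    sign = -1 if decrypt else 1
    for i in order:
        if i % 2 == 0:
            value = (int(left) + sign * _round_value(key, i, right, split)) % 10 ** split
            left = str(value).zfill(split)
        else:
            width = n - split
            value = (int(right) + sign * _round_value(key, i, left, width)) % 10 ** width
            right = str(value).zfill(width)
    return left + right


if __name__ == "__main__":
    key = b"key-held-only-by-the-payment-interface"  # illustrative key material
    pan = "4111111111111111"                         # well-known 16-digit test number

    token = toy_fpe(key, pan)
    assert token.isdigit() and len(token) == len(pan)   # equivalent
    assert token == toy_fpe(key, pan)                   # referential (deterministic)
    assert toy_fpe(key, token, decrypt=True) == pan     # reversible
    print(pan, "->", token)
```

In this sketch the legacy database would store only the token, while the payment interface, which alone holds the key, applies the inverse transform when a charge must actually be made.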

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." IEEE Spectrum, Institute of Electrical and Electronics Engineers, 26 Feb. 2013. Web. 2 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 9 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 8 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 2 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 9 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability)
Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape
Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction in IT infrastructure costs.

3. Access new functionality
A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story
RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices for modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in the program's fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open, every time.

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?


19

without worry of keys being compromised within a shared environment. Cloud users maintain control of their keys and can revoke them for application use at any time, while also being free to migrate applications between various data centers. In this way, the economies of cloud flexibility and scalability are enabled at lower risk.

While application keys are no longer co-located with data locally, encryption controls are still managed in silos, without the need to co-locate all enterprise keys centrally. Although economies of scale are not improved, this approach can have similar simplicity to local methods, while also suffering from a similar dependence on manual procedures.

Pros
• Provides the lowered-risk advantage of not co-locating keys, backups, and encrypted data in the same location, which would make the system more vulnerable to compromise
• Similar to local key management, remote management may improve security through isolation if keys are still managed in discrete application silos
• Cost-effective when kept simple; similar to local approaches, but managed over secured networks from virtually any location where security expertise is maintained
• Easier to control and audit, without having to physically attend to each distributed system or application, which can be time-consuming and costly
• Improves data mobility; if encryption devices move, key management systems can remain in the same place operationally

Cons
• Manual procedures don't improve security if they are still not part of a systematic key management approach
• No economies of scale if keys and logs continue to be managed only within a silo for individual encryption applications

Centralized key management
The idea of a centralized, unified (or, commonly, enterprise secure key management) system is often misunderstood. Not every administrative aspect needs to occur in a single centralized location; rather, the term refers to an ability to centrally coordinate operations across an entire key lifecycle by maintaining a single pane of glass for controls. Coordinating encrypted applications in a systematic approach creates a more reliable set of procedures to control which authorized devices can access keys and who can administer key lifecycle policies comprehensively.

A centralized approach reduces the risk of keys being compromised locally along with encrypted data by relying on higher-assurance, automated management systems. As a best practice, a hardware-based, tamper-evident key vault and policy/logging tools are deployed redundantly in clusters for high availability, spread across multiple geographic locations to create replicated backups for keys, policies, and configuration data.

Higher-assurance key protection combined with reliable security automation
A higher risk is assumed if relying upon manual procedures to manage keys, whereas a centralized solution runs the risk of creating toxic combinations of access controls if users are over-privileged to manage enterprise keys or applications are not properly authorized to store and retrieve keys.

Recognizing these critical concerns, centralized and secure key management systems are designed to coordinate enterprise-wide environments of encryption applications, keys, and administrative users using automated controls that follow security best practices. Unlike distributed key management systems that may operate locally, centralized key management can achieve better economies with the high-assurance security of hardened appliances that enforce policies reliably, while ensuring that activity logging is tracked consistently for auditing purposes and that alerts and reporting are more efficiently distributed and escalated when necessary.

Pros
• Similar to remote administration, economies of scale are achieved by enforcing controls across large estates of mixed applications from any location, with the added benefit of centralized management economies
• Coordinated partitioning of applications, keys, and users to improve on the benefit of local management
• Automation and consistency of key lifecycle procedures, universally enforced to remove the risk of manual administration practices and errors
• Typically managed over secured networks from any location to serve global encryption deployments
• Easier to control and audit, with a "single pane of glass" view to enforce controls and accelerate auditing
• Improves data mobility; the key management system remains centrally coordinated with high availability
• Economies of scale and reusability as more applications take advantage of a single universal system

Cons
• Key management appliances carry higher upfront costs for a single application, but do enable future reusability to improve total cost of ownership (TCO) and return on investment (ROI) over time, with consistent policy and the removal of redundancies
• If access controls are not managed properly, over-privileged users create toxic combinations that can compromise the system; best practices can minimize these risks

Figure 4: Central key management over wide area networks enables a single set of reliable controls and auditing over keys.

Local, remote, and centrally unified key management (continued)

20

Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure, where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class centralized and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for the best long-term reusability and ROI, relative to the cost, administrative flexibility, and security assurance levels provided.

Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard for measuring security assurance levels; a hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encryption applications to communicate for key operations. Ideally, key managers fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system (a brief client sketch follows these considerations).

Policy model: Key lifecycle controls should follow NIST SP800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete, tamper-proof audit trail for control attestation.

Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when defining user roles for specific responsibilities.

High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, helps coordinate with enterprise policy and procedures.
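As an illustration of the standards-based approach, the following minimal Python sketch shows an application enrolling a key with a KMIP-compliant key manager using the open-source PyKMIP client library. The hostname, port, and certificate paths are placeholders rather than any particular vendor's configuration, and a real deployment would layer the partitioning, logging, and high-availability practices described above on top of this basic flow.

```python
from kmip.pie.client import ProxyKmipClient
from kmip import enums

# Placeholder connection details: a real deployment would point at the
# enterprise key manager cluster and use certificates issued by its CA.
client = ProxyKmipClient(
    hostname="keymanager.example.com",   # hypothetical key manager address
    port=5696,                           # IANA-registered KMIP TLS port
    cert="/etc/pki/app-client.crt",      # client certificate for mutual TLS
    key="/etc/pki/app-client.key",
    ca="/etc/pki/km-ca.crt",
)

with client:
    # Ask the key manager to generate a 256-bit AES key; the application
    # keeps only the returned identifier, never local key material.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # Retrieve the key at run time when encryption or decryption is needed;
    # access is authorized and logged centrally by the key manager.
    key = client.get(key_id)
```

Because the application holds only a key identifier and fetches key material over mutually authenticated TLS, custody, rotation, and audit logging of the key itself remain with the central key manager.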

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.

21

As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach with strong access controls that del iver re l iable secur i ty you ensure cont inuous and appropriate avai labi l i ty to keys whi le support ing audit and compliance requirements You reduce administrative costs human error exposure to policy compliance failures and the risk of data breaches and business interruptions And you can also minimize dependence on costly media sanit izat ion and destruct ion services

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security – Data Security

HPE Security – Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security – Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE

Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise – Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan spent over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.




Reinvent Your Business Printing With HP

Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies – laser, inkjet, and PageWide – to address their printing needs.

How HP PageWide Technology is different

To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar spanning the entire width of a page, printing pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office

HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing – laser, inkjet, and now PageWide – can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization

Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP

To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.



IoT Evolution

Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2
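
As a quick sanity check on those projections, compound-growth arithmetic lets you work backwards from the 2020 figure; the short snippet below simply applies the stated 31.7% CAGR over the seven years from 2013 to 2020 to derive the implied 2013 installed base.

    # Back out the implied 2013 installed base from the figures quoted above:
    # 20.8 billion units in 2020, growing at a 31.7% CAGR from 2013 through 2020.
    cagr = 0.317
    units_2020 = 20.8e9
    years = 2020 - 2013          # seven compounding periods

    implied_2013_base = units_2020 / (1 + cagr) ** years
    print(f"Implied 2013 installed base: {implied_2013_base / 1e9:.1f} billion units")
    # -> roughly 3.0 billion units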

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory – from limited-capability M2M services to the super-capable IoT ecosystem – has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions

Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity – economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth) – which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things – Endpoints and Associated Services, Worldwide, 2015. 2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers

• Systems integrators for M2M/IoT services and industry-specific applications

• Managed ICT infrastructure providers

• Management platform providers for device management, service management, and charging

• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics

• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Marketplace, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview

For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type, from any industry, and manage a whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices – standards-based and proprietary communication – and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators – CSPs and enterprises alike – can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture

The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical- and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric – as data and its monetization are the essence of the IoT business model – and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules:

Device and Service Management (DSM)

The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform – manage sensors, verticals, data monetization chain, standards alignment, connectivity-agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C, and B2B2C.

With the DSM module, you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)

The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs – volume, variety, and velocity – typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols, such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
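
As a concrete feel for one of those protocols, the sketch below shows how a device-side client might publish a reading over MQTT using the widely available Eclipse Paho Python client; the broker address and topic naming are hypothetical and would in practice be dictated by whatever endpoint the platform's MQTT device controller exposes.

    # Illustrative MQTT publish from a device, using the Eclipse Paho client.
    # Broker host, port, and topic are placeholders, not platform values.
    import json
    import paho.mqtt.client as mqtt

    reading = {"deviceId": "meter-0042", "temperature": 21.5, "unit": "C"}

    mqtt_client = mqtt.Client(client_id="meter-0042")
    mqtt_client.connect("mqtt.example.com", 1883)          # plain MQTT port
    mqtt_client.publish("sensors/meter-0042/telemetry",    # hypothetical topic
                        json.dumps(reading), qos=1)
    mqtt_client.disconnect()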

Data Acquisition and Verification (DAV)

DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a sketch of such a call follows the list below). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation to data unit transformation and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
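
To make the oneM2M-style resource model mentioned above more tangible, here is a rough sketch of how an application might push a data point into a container as a contentInstance over HTTP REST. The CSE base URL, originator ID, and resource names are hypothetical; exact headers and paths depend on how the platform actually exposes its oneM2M interface.

    # Rough sketch of a oneM2M-style REST call: create a contentInstance (ty=4)
    # under an existing container. CSE base URL, originator, and resource
    # names are illustrative placeholders, not documented platform values.
    import json
    import requests

    CSE_BASE = "https://cse.example.com/~/in-cse/in-name"   # hypothetical CSE base
    headers = {
        "X-M2M-Origin": "C-myApp",                # originator (application entity)
        "X-M2M-RI": "req-0001",                   # request identifier
        "Content-Type": "application/json;ty=4",  # ty=4 -> contentInstance
    }
    body = {"m2m:cin": {"con": json.dumps({"temperature": 21.5})}}

    resp = requests.post(f"{CSE_BASE}/meter-0042/telemetry",
                         headers=headers, data=json.dumps(body), timeout=10)
    print(resp.status_code, resp.text)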

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure/encrypted communication

• Access control policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics

The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
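
Complex-event processing can be hard to picture in the abstract, so here is a toy, self-contained illustration of the pattern (not HPE platform code): watch a stream of readings and raise an alarm when several values cross a threshold within a short window.

    # Toy illustration of the complex-event-processing idea described above:
    # trigger an alarm when 3 or more readings exceed a threshold within any
    # 60-second window. Thresholds and data are invented for the example.
    from collections import deque

    WINDOW_SECONDS = 60
    THRESHOLD = 80.0        # e.g. degrees Celsius
    MIN_EVENTS = 3

    recent_hits = deque()   # timestamps of readings that exceeded the threshold

    def on_reading(timestamp, value):
        """Process one reading; return True if the alarm condition fires."""
        if value > THRESHOLD:
            recent_hits.append(timestamp)
        # Drop hits that have fallen outside the sliding window.
        while recent_hits and timestamp - recent_hits[0] > WINDOW_SECONDS:
            recent_hits.popleft()
        return len(recent_hits) >= MIN_EVENTS

    # Example stream: (seconds, value)
    for ts, val in [(0, 79.0), (10, 81.2), (25, 82.5), (40, 85.1), (200, 81.0)]:
        if on_reading(ts, val):
            print(f"ALARM at t={ts}s: {len(recent_hits)} high readings in window")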

Operations and Business Support Systems (OSS/BSS)

The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make the most of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)

The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success

The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
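
If your customer transactions are already in a spreadsheet or database export, even a few lines of analysis can start that segmentation. The sketch below (with column names invented for the example, not tied to any specific HPE tool) splits first-time from repeat buyers and compares their average order value.

    # Illustrative first-time vs. repeat customer segmentation with pandas.
    # Column names ('customer_id', 'order_total') are assumed for the example.
    import pandas as pd

    orders = pd.DataFrame({
        "customer_id": ["C1", "C2", "C1", "C3", "C2", "C1"],
        "order_total": [120.0, 45.0, 80.0, 200.0, 60.0, 95.0],
    })

    stats = orders.groupby("customer_id")["order_total"].agg(["count", "mean"])
    stats["segment"] = stats["count"].map(
        lambda n: "repeat" if n > 1 else "first-time")

    # Average order value by segment: a starting point for targeting decisions.
    print(stats.groupby("segment")["mean"].mean())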

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER

One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS? CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.



The latest reports on IT security all seem to point to a similar trend – both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime – which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes – which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study – but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
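
For the SIEM-integration point in particular, the general pattern is to normalize security events into a format the SIEM understands and forward them over syslog. The sketch below is a generic illustration using Python's standard library and the public CEF (Common Event Format) layout that ArcSight consumes; the collector address and event fields are invented for the example, and real NonStop deployments would rely on their audit/SIEM integration products (such as XYGATE Merged Audit) rather than ad hoc scripts.

    # Generic illustration of forwarding a security event to a SIEM collector
    # over syslog in CEF. Collector address and event details are made up.
    import logging
    import logging.handlers

    SIEM_COLLECTOR = ("siem.example.com", 514)   # hypothetical syslog receiver

    logger = logging.getLogger("audit-forwarder")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.SysLogHandler(address=SIEM_COLLECTOR))

    # CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity|Extension
    cef_event = (
        "CEF:0|ExampleCorp|AuditForwarder|1.0|100|Failed privileged logon|7|"
        "src=10.0.0.25 suser=OPERATOR msg=3 failed privileged logon attempts"
    )
    logger.info(cef_event)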

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems – including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my part, back in the early Tandem days, and there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender – nothing serious – but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you get into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really e-commerce-focused and online transaction processing (OLTP)-focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community – everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you had asked me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is that I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later, he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day, he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought, too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now, with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New: Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first; upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components so that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the parts that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
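
To illustrate the general idea, here is a minimal Python sketch of what a generator of this kind does: it copies a config.h.in template, fills in the feature macros it already knows the answers for, and defers everything package-specific to a hand-maintained config_vms.h included at the end. This is only a sketch of the approach, not the actual DCL config_h.com procedure, and the KNOWN_FEATURES entries are illustrative assumptions.

# Minimal sketch: generate config.h from config.h.in, deferring unknown
# macros to a hand-maintained config_vms.h (names here are illustrative).
import re

# Hypothetical feature answers a port maintainer already knows for this platform.
KNOWN_FEATURES = {
    "HAVE_UNISTD_H": "1",
    "HAVE_STRCASECMP": "1",
    "HAVE_MMAP": None,  # None means "leave undefined"
}

def generate_config_h(template_path="config.h.in", output_path="config.h"):
    undef = re.compile(r"^\s*#\s*undef\s+(\w+)")
    lines_out = []
    with open(template_path) as template:
        for line in template:
            match = undef.match(line)
            if match and match.group(1) in KNOWN_FEATURES:
                value = KNOWN_FEATURES[match.group(1)]
                if value is None:
                    lines_out.append(f"/* #undef {match.group(1)} */\n")
                else:
                    lines_out.append(f"#define {match.group(1)} {value}\n")
            else:
                lines_out.append(line)  # pass template lines through unchanged
    # Package-specific corrections live in config_vms.h, included last.
    lines_out.append('#include "config_vms.h"\n')
    with open(output_path, "w") as out:
        out.writelines(lines_out)

if __name__ == "__main__":
    generate_config_h()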

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environments used by these and other packages.

There are also tools outside the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2, which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40. Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficulty protecting that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

Figure: The data protection stack. Layers from top to bottom: Application, Database, Object, Storage. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic between layers (formatted data items, files and directories, disk blocks).

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
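
To make the three properties concrete, the following Python sketch applies a toy Feistel-style, format-preserving transform to a 16-digit string. It is deterministic, length-preserving, and reversible, which is enough to illustrate equivalence, referential integrity, and reversibility; however, it is emphatically not the NIST FFX/FF1 algorithm, and the key and card number shown are illustrative test values only.

# Toy format-preserving transform over digit strings (illustration only;
# NOT the NIST FFX/FF1 algorithm and not suitable for protecting real data).
import hashlib
import hmac

ROUNDS = 10

def _round_value(key: bytes, round_no: int, data: str, modulus: int) -> int:
    # Pseudo-random round function derived from HMAC-SHA256.
    digest = hmac.new(key, f"{round_no}|{data}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % modulus

def encrypt(key: bytes, digits: str) -> str:
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in range(ROUNDS):
        if i % 2 == 0:
            modulus = 10 ** len(a)
            a = str((int(a) + _round_value(key, i, b, modulus)) % modulus).zfill(len(a))
        else:
            modulus = 10 ** len(b)
            b = str((int(b) + _round_value(key, i, a, modulus)) % modulus).zfill(len(b))
    return a + b

def decrypt(key: bytes, digits: str) -> str:
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in reversed(range(ROUNDS)):
        if i % 2 == 0:
            modulus = 10 ** len(a)
            a = str((int(a) - _round_value(key, i, b, modulus)) % modulus).zfill(len(a))
        else:
            modulus = 10 ** len(b)
            b = str((int(b) - _round_value(key, i, a, modulus)) % modulus).zfill(len(b))
    return a + b

if __name__ == "__main__":
    key = b"demo-key"                  # illustrative key material
    pan = "4111111111111111"           # well-known test card number
    token = encrypt(key, pan)
    assert len(token) == len(pan) and token.isdigit()  # equivalent format
    assert token == encrypt(key, pan)                   # referential (deterministic)
    assert decrypt(key, token) == pan                   # reversible
    print(pan, "->", token)

Stored in place of the real number, the token preserves column width, digit-only format, and join relationships, while only the node holding the key can recover the original value.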

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr 2012. Web. 19 Jan 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept 2015. Web. 8 Oct 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers. 26 Feb 2013. Web. 02 Nov 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug 2009. Web. 08 Oct 2015.
11 Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar 2012. Web. 02 Nov 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct 2015.
15 Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

Page 23: Connect Converge Spring 2016

20

Best practices: adopting a flexible, strategic approach
In real-world practice, local, remote, and centralized key management can coexist within larger enterprise environments, driven by the needs of diverse applications deployed across multiple data centers. While a centralized solution may apply globally, there may also be scenarios where localized solutions require isolation for mandated reasons (e.g., government regulations or weak geographic connectivity), application sensitivity level, or organizational structure, where resources, operations, and expertise are best kept in a center of excellence.

In an enterprise-class, centralized, and secure key management solution, a cluster of key management servers may be distributed globally while synchronizing keys and configuration data for failover. Administrators can connect to appliances from anywhere globally to enforce policies, with a single set of controls to manage and a single point for auditing security and performance of the distributed system.

Considerations for deploying a centralized enterprise key management system
Enterprise secure key management solutions that offer the flexibility of local, remote, and centralized controls over keys will include a number of defining characteristics. It's important to consider the aspects that will help match the right solution to an application environment for best long-term reusability and ROI, relative to cost, administrative flexibility, and security assurance levels provided.

• Hardware or software assurance: Key management servers deployed as appliances, virtual appliances, or software will protect keys to varying degrees of reliability. FIPS 140-2 is the standard to measure security assurance levels. A hardened, hardware-based appliance solution will be validated to level 2 or above for tamper evidence and response capabilities.

• Standards-based or proprietary: The OASIS Key Management Interoperability Protocol (KMIP) standard allows servers and encrypted applications to communicate for key operations (see the client sketch after this list). Ideally, key managers can fully support current KMIP specifications to enable the widest application range, increasing ROI under a single system.

• Policy model: Key lifecycle controls should follow NIST SP800-57 recommendations as a best practice. This includes key management systems enforcing user and application access policies depending on the state in the lifecycle of a particular key or set of keys, along with a complete tamper-proof audit trail for control attestation.

• Partitioning and user separation: To avoid applications and users having over-privileged access to keys or controls, centralized key management systems need to be able to group applications according to enterprise policy and to offer flexibility when assigning user roles to specific responsibilities.

• High availability: For business continuity, key managers need to offer clustering and backup capabilities for key vaults and configurations, for failover and disaster recovery. At a minimum, two key management servers replicating data over a geographically dispersed network and/or a server with automated backups are required.

• Scalability: As applications scale and new applications are enrolled to a central key management system, keys, application connectivity, and administrators need to scale with the system. An enterprise-class key manager can elegantly handle thousands of endpoint applications and millions of keys for greater economies.

• Logging: Auditors require a single-pane-of-glass view into operations, and IT needs to monitor performance and availability. Activity logging with a single view helps accelerate audits across a globally distributed environment. Integration with enterprise systems via SNMP, syslog, email alerts, and similar methods helps ensure IT visibility.

• Enterprise integration: As key management is one part of a wider security strategy, a balance is needed between maintaining secure controls and wider exposure to enterprise IT systems for ease of use. External authentication and authorization, such as Lightweight Directory Access Protocol (LDAP), or security information and event management (SIEM) for monitoring, help coordinate with enterprise policy and procedures.
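
For a sense of what KMIP interoperability looks like from an application's point of view, here is a brief Python sketch that asks a KMIP-compliant key manager for a new AES key. It assumes the open-source PyKMIP client library and uses placeholder connection details; it is not tied to any particular vendor's product or configuration.

# Sketch: requesting an AES key from a KMIP-compliant key manager using the
# open-source PyKMIP library (hostname, port, and certificate paths are
# placeholders, not a specific vendor's configuration).
from kmip.core import enums
from kmip.pie.client import ProxyKmipClient

client = ProxyKmipClient(
    hostname="keymanager.example.com",   # placeholder key manager address
    port=5696,                           # standard KMIP port
    cert="/etc/pki/client-cert.pem",     # client certificate (placeholder)
    key="/etc/pki/client-key.pem",
    ca="/etc/pki/ca-cert.pem",
)

with client:
    # Ask the key manager to generate and store a 256-bit AES key.
    key_id = client.create(
        enums.CryptographicAlgorithm.AES,
        256,
        name="app-volume-encryption-key",
    )
    # Later, an authorized application can retrieve the key by its identifier.
    secret = client.get(key_id)
    print("Created key with unique identifier:", key_id)

Because the protocol, not the product, defines this exchange, any application speaking KMIP can enroll against a central key manager without bespoke integration work.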

Conclusions
As enterprises mature in complexity by adopting encryption across a greater portion of their critical IT infrastructure, the need to move beyond local key management towards an enterprise strategy becomes more apparent. Achieving economies of scale with a single-pane-of-glass view into controls and auditing can help accelerate policy enforcement and control attestation.

Centralized and secure key management enables enterprises to locate keys and their administration within a security center of excellence while not compromising the integrity of a distributed application environment. The best of all worlds can be achieved with an enterprise strategy that coordinates applications, keys, and users with a reliable set of controls.

Figure 5: Clustering key management enables endpoints to connect to local key servers, a primary data center, and/or disaster recovery locations, depending on high availability needs and the global distribution of encryption applications.

21

As more applications start to embed encryption capabilities natively, and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.

HPE Data Security Technologies
HPE Enterprise Secure Key Manager
Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it, you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high-availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise
ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers
• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage
• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security – Data Security
HPE Security – Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security – Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE

Nathan Turajski, Senior Product Manager, HPE
Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise – Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently, he has also led security product lines at Trend Micro and Thales e-Security.


22

23

Reinvent Your Business Printing With HP
Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print-shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet, and PageWide, to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, laser, inkjet, and now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.

24

Now, with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents, without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class, by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information, and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.

25

26

IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example low-throughput network connectivity.



1. Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices, using both standards-based and proprietary communication, and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators, CSPs and enterprises alike, can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization is the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, which manages the end-to-end lifecycle of the IoT service and associated gateways/devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform: manage sensors across verticals, data monetization chain, standards alignment, connectivity-agnostic design, and new service offerings. © Copyright Hewlett Packard Enterprise 2016]

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C, and B2B2C models.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.
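As a generic illustration of how hierarchical accounts and RBAC can combine to serve B2B and B2B2C models, the short Python sketch below models accounts as a parent-linked chain and lets a role granted at a parent account (for example, the IoT operator) apply to its child tenants. This is only a conceptual sketch under assumed names; it is not the DSM module's actual data model or API.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Account:
    name: str
    parent: Optional["Account"] = None          # e.g. end customer -> enterprise -> operator
    roles: dict = field(default_factory=dict)   # user -> set of role names

    def has_role(self, user: str, role: str) -> bool:
        """A role granted at any ancestor account also applies to its child accounts."""
        account = self
        while account is not None:
            if role in account.roles.get(user, set()):
                return True
            account = account.parent
        return False

# Hypothetical accounts and users for illustration only.
operator = Account("iot-operator", roles={"alice": {"platform-admin"}})
utility = Account("metro-water-utility", parent=operator, roles={"bob": {"device-manager"}})

print(utility.has_role("bob", "device-manager"))    # True: granted directly to the B2B tenant
print(utility.has_role("alice", "platform-admin"))  # True: inherited from the operator account
print(utility.has_role("bob", "platform-admin"))    # False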

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols, such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
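To make the idea of a protocol connector concrete, here is a minimal Python sketch of what such a connector might look like for MQTT: it subscribes to an assumed topic layout, wraps each reading in a oneM2M-style contentInstance structure, and hands it off for upload. The broker address, topic names, and the forward_to_platform helper are illustrative assumptions, not part of the HPE Protocol Factory.

import json
import paho.mqtt.client as mqtt

# Hypothetical topic layout: one topic per meter, e.g. "meters/<device-id>/reading".
BROKER = "broker.example.net"
TOPIC = "meters/+/reading"

def forward_to_platform(resource):
    # Stand-in for the upload step; a real connector would POST this to the
    # platform's data-acquisition interface.
    print("normalized resource:", json.dumps(resource))

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    device_id = msg.topic.split("/")[1]
    # Wrap the raw reading in a oneM2M-style contentInstance ("m2m:cin").
    resource = {"m2m:cin": {"lbl": [device_id], "con": reading}}
    forward_to_platform(resource)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()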

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a small sketch of such a request appears at the end of this subsection). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data unit transformation to application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Processing data and triggering actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure/encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
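The sketch below shows, in Python, what a north-bound request along the lines described above might look like: an application fetching the latest reading for a device through a oneM2M-style HTTP REST interface. The base URL, originator ID, and container path are placeholders; a real deployment defines its own resource tree, host names, and access credentials.

import requests

# Placeholder oneM2M-style CSE base URL and application originator (assumptions).
CSE_BASE = "https://iot-platform.example.net/~/in-cse/in-name"
HEADERS = {
    "X-M2M-Origin": "S-energy-dashboard",   # assumed application identity
    "Accept": "application/json",
}

def latest_reading(container_path: str) -> dict:
    """Fetch the newest contentInstance in a device's data container ('la' = latest)."""
    url = f"{CSE_BASE}/{container_path}/la"
    response = requests.get(url, headers=HEADERS, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(latest_reading("meter-001/readings"))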

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
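As a rough illustration of the descriptive side of such analytics, the sketch below runs a simple KPI query against a Vertica database using the open source vertica_python client. The connection settings, table, and column names are assumptions made for the example, not part of the HPE module.

import vertica_python

# Connection settings are placeholders for an assumed Vertica instance.
conn_info = {
    "host": "vertica.example.net",
    "port": 5433,
    "user": "analyst",
    "password": "********",
    "database": "iot",
}

# A simple descriptive KPI: average daily consumption per meter over the last week.
# The meter_readings table and its columns are assumptions for this illustration.
QUERY = """
    SELECT meter_id,
           DATE_TRUNC('day', reading_ts) AS day,
           AVG(consumption_kwh) AS avg_kwh
    FROM meter_readings
    WHERE reading_ts >= CURRENT_DATE - 7
    GROUP BY meter_id, DATE_TRUNC('day', reading_ts)
    ORDER BY meter_id, day
"""

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute(QUERY)
    for meter_id, day, avg_kwh in cur.fetchall():
        print(meter_id, day, round(avg_kwh, 2))
finally:
    conn.close()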

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers (the sketch below shows one way to do this). Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
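As one small example of that first step, the Python/pandas sketch below splits an exported order history into first-time and repeat buyers and compares a few basic purchasing metrics. The file name and column names are placeholders; adapt them to whatever your commerce or point-of-sale system actually exports.

import pandas as pd

# Placeholder export of order history, one row per order; column names are
# illustrative and should be adjusted to match your own system's export.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Flag repeat customers: anyone with more than one order in the file.
order_counts = orders.groupby("customer_id")["order_id"].transform("count")
orders["segment"] = (order_counts > 1).map({True: "repeat", False: "first-time"})

# Compare the two segments on a few basic buying metrics.
summary = orders.groupby("segment").agg(
    customers=("customer_id", "nunique"),
    orders=("order_id", "count"),
    avg_order_value=("order_total", "mean"),
)
print(summary)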

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals.

The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced consistently good performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kenneth.scudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you get into a sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community; everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you had asked me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get across those messages that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?", even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

copy2016 XYPRO Technology Corporation All rights reserved Brands mentioned are trademarks of their respective companies

Integrated with XYGATE Merged Audit. New! Now audits 100% of all SQL/MX & MP user activity.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release had so many updates, there were still many issues: not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require GNV to be installed in order to use them.

If you do install or upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools: AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools: CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort of John Malmberg for CPython 3.6a0+ is an example of using the above tools for a build. It is a work-in-progress that currently needs a working port of libffi for the build to continue, but it is creating a functional CPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of CPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches over NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site
Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including ClamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site
Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive
Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives
There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years: he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.
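To illustrate the format-preserving idea behind FFX-style algorithms, the toy Python sketch below maps a digit string to another digit string of the same length using a small Feistel construction, so the protected value still fits the column width and character set the legacy application expects. It is only a conceptual illustration of format preservation under assumed names, not an implementation of the NIST FF1/FF3 modes, and must not be used to protect real data.

import hashlib
import hmac

def _prf(key: bytes, round_no: int, data: str, width: int) -> int:
    """Toy round function: a pseudo-random integer with `width` decimal digits."""
    digest = hmac.new(key, f"{round_no}|{data}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)

def toy_fpe(key: bytes, digits: str, decrypt: bool = False, rounds: int = 10) -> str:
    """Map a digit string (length >= 2) to another digit string of the same length."""
    half = len(digits) // 2
    a, b = digits[:half], digits[half:]
    schedule = range(rounds - 1, -1, -1) if decrypt else range(rounds)
    for i in schedule:
        if i % 2 == 0:          # even rounds modify the left half using the right half
            width = len(a)
            f = _prf(key, i, b, width)
            a = str((int(a) + (-f if decrypt else f)) % 10 ** width).zfill(width)
        else:                   # odd rounds modify the right half using the left half
            width = len(b)
            f = _prf(key, i, a, width)
            b = str((int(b) + (-f if decrypt else f)) % 10 ** width).zfill(width)
    return a + b

pan = "4111111111111111"
token = toy_fpe(b"demo key", pan)
assert toy_fpe(b"demo key", token, decrypt=True) == pan
print(pan, "->", token)   # same length, digits only, so it fits existing storage layouts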

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems; even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems. In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has grown exponentially, on the order of O(2^n), as illustrated by the following diagram.1


1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order from most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


2 We use the term "protection" for a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers from most specific to most general: Application, Database, Object, and Storage, with example traffic of formatted data items, files and directories, and disk blocks. Flow arrows represent transport of clear data between layers via a secure tunnel; descriptions represent example traffic.]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection. When adding data protection to a legacy system, we obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also attracts more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
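To make these three properties concrete, here is a minimal, self-contained Python sketch. It is a toy keyed permutation of even-length digit strings built from a Feistel network with HMAC-based round functions; it is not the NIST FFX algorithm, and the key, names, and card number shown are purely illustrative. A production system would use a vetted FFX/FF1 implementation.

    # Toy illustration only (not NIST FFX): a keyed, reversible permutation of
    # even-length digit strings, built from a Feistel network with HMAC-SHA256
    # round functions. It demonstrates the equivalent, referential, and
    # reversible properties described above.
    import hmac, hashlib

    ROUNDS = 10

    def _round(key, round_no, half_digits, width):
        msg = ("%d:%s" % (round_no, half_digits)).encode()
        digest = hmac.new(key, msg, hashlib.sha256).hexdigest()
        return int(digest, 16) % (10 ** width)

    def protect(key, digits):
        half = len(digits) // 2
        left, right = int(digits[:half]), int(digits[half:])
        for r in range(ROUNDS):
            left, right = right, (left + _round(key, r, str(right).zfill(half), half)) % (10 ** half)
        return str(left).zfill(half) + str(right).zfill(half)

    def unprotect(key, digits):
        half = len(digits) // 2
        left, right = int(digits[:half]), int(digits[half:])
        for r in reversed(range(ROUNDS)):
            left, right = (right - _round(key, r, str(left).zfill(half), half)) % (10 ** half), left
        return str(left).zfill(half) + str(right).zfill(half)

    if __name__ == "__main__":
        key = b"demo-key-do-not-hardcode-in-production"
        pan = "4111111111111111"                           # illustrative card number
        token = protect(key, pan)
        assert len(token) == len(pan) and token.isdigit()  # equivalent
        assert token == protect(key, pan)                  # referential
        assert unprotect(key, token) == pan                # reversible
        print(pan, "->", token)

Because the transform is deterministic and format-preserving, a protected column of card numbers keeps its key relationships while becoming useless to an attacker who lacks the inverse function.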

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of breached data increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm with equivalent, referential, and reversible properties enables analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk of acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity.

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


As more applications start to embed encryption capabilities natively and connectivity standards such as KMIP become more widely adopted, enterprises will benefit from an enterprise secure key management system that automates security best practices and achieves greater ROI as additional applications are enrolled into a unified key management system.
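As a flavor of what KMIP-based enrollment looks like from an application's point of view, here is a minimal sketch using the open-source PyKMIP client library. The hostname, port, and certificate paths are placeholders, and any KMIP-compliant key manager (such as ESKM) would be configured with its own values; this is an illustration, not an ESKM-specific API.

    # Minimal sketch: requesting and retrieving an AES key from a KMIP-compliant
    # key manager using the open-source PyKMIP client. Hostname, port, and
    # certificate paths are illustrative placeholders.
    from kmip.pie.client import ProxyKmipClient
    from kmip import enums

    client = ProxyKmipClient(
        hostname="keymanager.example.com",  # placeholder key manager address
        port=5696,                          # standard KMIP port
        cert="/etc/pki/tls/client-cert.pem",
        key="/etc/pki/tls/client-key.pem",
        ca="/etc/pki/tls/ca-cert.pem",
    )

    with client:
        # Ask the key manager to create a 256-bit AES key; it returns a unique ID.
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
        # An enrolled application later fetches the key by ID when it needs to
        # encrypt or decrypt; key material is served and audited centrally.
        key = client.get(key_id)
        print("Retrieved managed key", key_id)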

HPE Data Security Technologies: HPE Enterprise Secure Key Manager. Our HPE enterprise data protection vision includes protecting sensitive data wherever it lives and moves in the enterprise, from servers to storage and cloud services. It includes HPE Enterprise Secure Key Manager (ESKM), a complete solution for generating and managing keys by unifying and automating encryption controls. With it you can securely serve, control, and audit access to encryption keys while enjoying enterprise-class security, scalability, reliability, and high availability that maintains business continuity.

Standard HPE ESKM capabilities include high-availability clustering and failover, identity and access management for administrators and encryption devices, secure backup and recovery, a local certificate authority, and a secure audit logging facility for policy compliance validation. Together with HPE Secure Encryption for protecting data-at-rest, ESKM will help you meet the highest government and industry standards for security, interoperability, and auditability.

Reliable security across the global enterprise. ESKM scales easily to support large enterprise deployment of HPE Secure Encryption across multiple geographically distributed data centers, tens of thousands of encryption clients, and millions of keys.

The HPE data encryption and key management portfolio uses ESKM to manage encryption for servers and storage, including:

• HPE Smart Array Controllers for HPE ProLiant servers

• HPE NonStop Volume Level Encryption (VLE) for disk, virtual tape, and tape storage

• HPE Storage solutions, including all StoreEver encrypting tape libraries, the HPE XP7 Storage Array, and HPE 3PAR

With certified compliance and support for the OASIS KMIP standard, ESKM also supports non-HPE storage, server, and partner solutions that comply with the KMIP standard. This allows you to access the broad HPE data security portfolio while supporting heterogeneous infrastructure and avoiding vendor lock-in.

Benefits beyond security

When you encrypt data and adopt the HPE ESKM unified key management approach, with strong access controls that deliver reliable security, you ensure continuous and appropriate availability of keys while supporting audit and compliance requirements. You reduce administrative costs, human error, exposure to policy compliance failures, and the risk of data breaches and business interruptions. And you can also minimize dependence on costly media sanitization and destruction services.

Don't wait another minute to take full advantage of the encryption capabilities of your servers and storage. Contact your authorized HPE sales representative or visit our website to find out more about our complete line of data security solutions.

About HPE Security - Data Security: HPE Security - Data Security drives leadership in data-centric security and encryption solutions. With over 80 patents and 51 years of expertise, we protect the world's largest brands and neutralize breach impact by securing sensitive data at rest, in use, and in motion. Our solutions provide advanced encryption, tokenization, and key management that protect sensitive data across enterprise applications, data processing, infrastructure, cloud, payments ecosystems, mission-critical transactions, storage, and Big Data platforms. HPE Security - Data Security solves one of the industry's biggest challenges: simplifying the protection of sensitive data in even the most complex use cases. CLICK HERE TO LEARN MORE.

Nathan Turajski, Senior Product Manager, HPE. Nathan Turajski is a Senior Product Manager for Hewlett Packard Enterprise - Data Security (Atalla), responsible for enterprise key management solutions that support HPE storage and server products and technology partner encryption applications based on interoperability standards. Prior to joining HP, Nathan's background includes over 15 years launching Silicon Valley data security start-ups in product management and marketing roles, including Securant Technologies (acquired by RSA Security), Postini (acquired by Google), and NextLabs. More recently he has also led security product lines at Trend Micro and Thales e-Security.



Reinvent Your Business Printing With HP – Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and scales it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies – laser, inkjet, and PageWide – to address their printing needs.

How HP PageWide Technology is different. To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach, featuring a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correctly sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office. HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing – laser, inkjet, and now PageWide – can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small workgroups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization. Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups that need dedicated high-quality color printing. And some businesses also need to scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio, we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allow you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information and help make the most of your IT investment.

Turning to HP. To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution. Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-power, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions. Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low-throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1 Gartner, Forecast: Internet of Things – Endpoints and Associated Services, Worldwide, 2015.
2 The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited to a specific device type, vertical, protocol, and set of business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors of growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview. For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At its heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices, using standards-based and proprietary communication, and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, plus data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators, CSPs and enterprises alike, can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer), and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture. The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical- and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform over traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM). The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform – manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP). The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
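To give a sense of the device-side traffic such protocol connectors handle, here is a minimal sketch of a sensor publishing a telemetry reading over MQTT with the open-source Eclipse Paho client (1.x API). The broker address, topic scheme, and payload fields are illustrative placeholders rather than HPE Universal IoT Platform specifics.

    # Minimal sketch: a device publishing one telemetry reading over MQTT using
    # the Eclipse Paho Python client (paho-mqtt 1.x API). Broker, topic, and
    # payload fields are illustrative placeholders.
    import json
    import time
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"        # placeholder MQTT broker / connector
    TOPIC = "site42/meter/7/telemetry"   # placeholder topic naming scheme

    client = mqtt.Client(client_id="meter-7")
    client.connect(BROKER, port=1883, keepalive=60)
    client.loop_start()                  # run the network loop in the background

    reading = {"ts": int(time.time()), "kwh": 1.42, "voltage": 229.8}
    # QoS 1: the broker acknowledges receipt, a common choice for metering data.
    client.publish(TOPIC, json.dumps(reading), qos=1)

    client.loop_stop()
    client.disconnect()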

Data Acquisition and Verification (DAV). DAV supports secure bi-directional data communication between IoT applications and IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a sketch of such a request follows the list below). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
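The following is a minimal sketch of the kind of north-bound request the oneM2M standard allows: creating a contentInstance (resource type 4) under a container via the oneM2M HTTP binding, using the Python requests library. The CSE base URL, originator identifier, and resource path are placeholders for illustration.

    # Minimal sketch: a north-bound application creating a oneM2M contentInstance
    # (ty=4) under a container via the oneM2M HTTP binding. CSE address,
    # originator, and resource path are illustrative placeholders.
    import json
    import uuid
    import requests

    CSE_BASE = "https://cse.example.com:8080/~/in-cse/in-name"  # placeholder CSE
    CONTAINER = CSE_BASE + "/meter-7/telemetry"                 # placeholder container

    headers = {
        "X-M2M-Origin": "C-meter-app",           # originator (application) identifier
        "X-M2M-RI": str(uuid.uuid4()),           # unique request identifier
        "Content-Type": "application/json;ty=4", # ty=4 marks a contentInstance
    }
    body = {"m2m:cin": {"con": json.dumps({"kwh": 1.42, "voltage": 229.8})}}

    resp = requests.post(CONTAINER, headers=headers, json=body, timeout=10)
    resp.raise_for_status()
    print("Created contentInstance:", resp.json()["m2m:cin"]["ri"])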

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics or batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics. The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).

Operations and Business Support Systems (OSS/BSS). The BSS/OSS module provides a consolidated end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC). The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success. The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS. If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
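As an illustration of how small such a starting point can be, here is a minimal sketch that separates first-time from repeat buyers with pandas. The file name and column names (orders.csv, customer_id, order_date, order_total) are hypothetical placeholders for whatever your order export actually contains.

```python
# Minimal sketch: segment first-time vs. repeat customers from raw order data.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Count orders and total spend per customer.
summary = orders.groupby("customer_id").agg(
    order_count=("order_date", "count"),
    total_spend=("order_total", "sum"),
)

# Tag each customer as first-time or repeat, then compare the two segments.
summary["segment"] = summary["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)
print(summary.groupby("segment")["total_spend"].describe())
```

A comparison like this is often enough to show whether repeat buyers deserve a dedicated marketing effort before you invest in a larger analytics platform.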

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER

One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, and easy to manage and deploy, with low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development and Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98%-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). In contrast, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI; access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really e-commerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies, in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the most mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face. An integrated SQL Database Manager for HP NonStop. A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more, with full support for both SQL/MP and SQL/MX. New: now audits 100% of all SQL/MX and MP user activity, integrated with XYGATE Merged Audit. Learn more at xypro.com/SQLXPress. ©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first, and note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running the config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port open source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years: he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical open source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to see your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain underprotected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined. Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems. In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices

Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm transforming data from its original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Figure: The data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic: formatted data items (Application and Database), files and directories (Object), and disk blocks (Storage). Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string with three properties (a toy illustration follows this list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
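The FFX algorithms themselves are too involved to reproduce here, but the three properties are easy to see in a toy stand-in. The sketch below uses a keyed, per-position digit shift purely for illustration; the key and function names are hypothetical, and this is emphatically not the FFX/FF1 construction defined in the standard.

```python
# Toy illustration only: a keyed, per-position digit substitution that mimics the
# three FPE properties (equivalent, referential, reversible). It is NOT NIST FFX/FF1
# and must not be used to protect real card data.
import hashlib
import hmac

KEY = b"demo-key"  # hypothetical key material

def _offsets(length):
    # Derive a deterministic digit offset for each position from the key.
    digest = hmac.new(KEY, str(length).encode(), hashlib.sha256).digest()
    return [digest[i % len(digest)] % 10 for i in range(length)]

def protect(pan):
    # Output has the same length and character set as the input (equivalent),
    # and the same input always yields the same output (referential).
    return "".join(str((int(d) + o) % 10) for d, o in zip(pan, _offsets(len(pan))))

def reveal(token):
    # Reversible: subtracting the same offsets recovers the original number.
    return "".join(str((int(d) - o) % 10) for d, o in zip(token, _offsets(len(token))))

card = "4111111111111111"
token = protect(card)
assert len(token) == len(card) and token.isdigit()  # equivalent
assert protect(card) == token                       # referential: stable mapping
assert reveal(token) == card                        # reversible: exact round-trip
print(card, "->", token)
```

In a real deployment the protect and reveal operations would call a vetted FPE library, and only the egress node (the payment interface in this example) would hold the key needed to reveal.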

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.
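To make the sharing benefit concrete, here is a minimal sketch of why a referential transform keeps third-party analytics intact. The protect() function below is a trivially deterministic stand-in (not a real FPE or tokenization call), and the patient and visit records are invented placeholders.

```python
# Minimal sketch: joins and aggregations still work on anonymized keys because a
# referential transform maps each original identifier to exactly one token.
import pandas as pd

def protect(value):
    # Stand-in for a referential FPE/tokenization call; deterministic, illustration only.
    return value[::-1]

patients = pd.DataFrame({"patient_id": ["2048", "5731"], "risk_score": [0.82, 0.15]})
visits = pd.DataFrame({"patient_id": ["2048", "2048", "5731"], "visit_cost": [120, 340, 90]})

# De-identify both data sets before handing them to a third-party analytics firm.
for df in (patients, visits):
    df["patient_id"] = df["patient_id"].map(protect)

# The analytics firm can still link the data sets and aggregate per patient,
# without ever seeing a real patient identifier.
report = visits.merge(patients, on="patient_id").groupby("patient_id").agg(
    total_cost=("visit_cost", "sum"), risk_score=("risk_score", "max")
)
print(report)
```

Because the transform is also reversible, the originating organization can map the anonymized results back to real patients when it needs to act on them.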

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter. Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum. Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security. Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money. NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT. HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

Diana Cortes is a Product Marketing Manager for Integrity Superdome X servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story

RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

The program is now in its fifth year, and we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.



Reinvent Your Business Printing With HP

Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs – the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology previously used primarily in print shops and for graphic arts, and has scaled it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies – laser, inkjet and PageWide – to address their printing needs.

How HP PageWide Technology is different

To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office

HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing – laser, inkjet and now PageWide – can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40 percent faster, use up to 53 percent less energy and have a 40 percent smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50 percent less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions and services for SMBs and enterprises. Ashley has more than 17 years of high-tech marketing and management experience.

Now with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents – without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84 percent less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class by a dramatic margin. And fewer consumable parts mean there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization

Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services and solutions into one integrated approach. Working with you, we assess, deploy and manage your imaging and printing system – tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage and share information and help make the most of your IT investment.

Turning to HP

To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies and www.hp.com/go/printerspeeds


IoT Evolution

Today, it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7 percent CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.¹ In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.²

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory – from limited-capability M2M services to the super-capable IoT ecosystem – has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology – to introduce new services and business models – may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions

Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity – economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility or bandwidth) – which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1. Gartner, Forecast: Internet of Things – Endpoints and Associated Services, Worldwide, 2015
2. The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances and software development.

Nigel Upton

Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management and charging
• Data processing layer operators to acquire data, then verify, consolidate and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview

For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases – defined by an application and a device type – from any industry, and manage the whole ecosystem from the time an application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software – widely used in the communications industry – by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices – standards-based and proprietary communication – and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while integrating devices connected by other Wi-Fi, fixed or mobile networks. These include GPRS (2G and 3G), LTE 4G and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services and solutions, IoT operators – CSPs and enterprises alike – can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer) and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture

The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and is designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo, and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared to traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric – as data and its monetization are the essence of the IoT business model – and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules:

Device and Service Management (DSM)

The DSM module is the nerve center of the HPE Universal IoT Platform; it manages the end-to-end lifecycle of the IoT service and the associated gateways, devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform – manage sensors across verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C and B2B2C.

With the DSM module you can manage IoT applications – configuration, tariff plan, subscription, device association and others – and IoT gateways and devices, including provisioning, configuration and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP)

The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices, and for communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs – volume, variety and velocity – typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST and others.
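To make the device side of this concrete, here is a minimal, illustrative sketch (not taken from HPE Universal IoT Platform documentation) of a field device pushing a reading over MQTT, one of the connectivity protocols listed above. The broker address, topic layout and payload fields are assumptions for the example only.

import json
import time

import paho.mqtt.publish as publish   # pip install paho-mqtt

BROKER = "iot-gateway.example.net"       # hypothetical broker/gateway endpoint
TOPIC = "meters/meter-0042/telemetry"    # hypothetical topic naming convention

reading = {
    "deviceId": "meter-0042",
    "timestamp": int(time.time()),
    "kwh": 1.37,                         # sample measurement value
}

# QoS 1 asks the broker to acknowledge delivery at least once.
publish.single(TOPIC, json.dumps(reading), qos=1, hostname=BROKER, port=1883)

On the platform side, a protocol connector built with the Protocol Factory would subscribe to such topics and normalize the payloads into the platform's uniform data model.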

Data Acquisition and Verification (DAV)

DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, and maintains it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a minimal sketch of such a request follows the list below). The DAV component is also responsible for the transformation, validation and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation and application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics

The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution and visualization environment for most types of analytics, including batch and real-time – based on 'Complex-Event Processing' – for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media and geo-fencing), predictive determination and prescriptive recommendation.
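A trivial sketch of the kind of real-time, complex-event style rule mentioned above: raise an alarm when several consecutive readings from a device exceed a threshold. The window size, threshold and alert action are illustrative assumptions, not platform defaults.

from collections import deque

WINDOW = 3           # consecutive readings required to trigger
THRESHOLD = 75.0     # e.g. degrees Celsius

def make_detector(window=WINDOW, threshold=THRESHOLD):
    recent = deque(maxlen=window)
    def on_reading(device_id, value):
        recent.append(value)
        if len(recent) == window and all(v > threshold for v in recent):
            # In a real deployment this would raise an alarm event downstream.
            print("ALARM: %s sustained high readings: %s" % (device_id, list(recent)))
    return on_reading

detector = make_detector()
for value in (70.1, 76.2, 77.5, 78.0):
    detector("sensor-17", value)   # fires once the last three readings all exceed 75.0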

Operations and Business Support Systems (OSS/BSS)

The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer' and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures and protecting vital revenue streams.

Data Service Cloud (DSC)

The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

The Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success

The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application – an infrastructure that sits on top of ubiquitous, securely managed connectivity and enables the identification, development and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical and client-agnostic solution with high scalability, modularity and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE

WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen

BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
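For example, if your order history sits in a simple export, a few lines of Python can split first-time from repeat buyers and compare their value. This is only an illustration; the file and column names are assumptions about your own data, not part of any HPE or Microsoft offering.

import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])   # hypothetical export

per_customer = orders.groupby("customer_id").agg(
    orders=("order_id", "nunique"),
    revenue=("order_total", "sum"),
)
per_customer["segment"] = per_customer["orders"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Average spend by segment: a quick first view of where the value sits.
print(per_customer.groupby("segment")["revenue"].agg(["count", "mean"]))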

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking and storage offer the performance, scale and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START, THE BETTER

One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning Big Data into Business Insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.

As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management and creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced consistently good performance, meeting the needs of staff, patients and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, and easy to manage and deploy, with low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening and raising chickens.


The latest reports on IT security all seem to point to a similar trend – both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime – which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs – 39 percent and 35 percent of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE) and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users

Malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98-99 percent of the companies experienced attacks from viruses, worms, Trojans and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35 percent of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes – which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
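One simple way to approach that kind of monitoring, sketched here purely as an illustration (this is not an XYPRO or HPE product feature), is to baseline each user's normal daily activity and flag days that deviate sharply from it. The event source, counts and threshold below are assumptions.

import statistics

def flag_outliers(history_by_user, today_counts, sigmas=3.0):
    """history_by_user: {user: [daily counts]}; today_counts: {user: today's count}."""
    alerts = []
    for user, history in history_by_user.items():
        if len(history) < 5:
            continue                               # not enough baseline to judge
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid a zero threshold
        if today_counts.get(user, 0) > mean + sigmas * stdev:
            alerts.append(user)
    return alerts

history = {"ops.admin": [4, 6, 5, 7, 5, 6], "batch.svc": [120, 115, 130, 125, 118]}
print(flag_outliers(history, {"ops.admin": 42, "batch.svc": 128}))   # -> ['ops.admin']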

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50 percent of companies in the study had implemented access governance tools, and fewer than 45 percent had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study – but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23 percent ROI and encryption technologies had a 21 percent ROI; access governance had a 13 percent ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems – including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kenneth.scudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender – nothing serious – but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community – everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy and care to really be effective as a mentor. Tom has been not only the most influential person in my career, but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later, he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day, he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are always the first priority. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get across the messages I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop.

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

New: Now audits 100% of all SQL/MX and MP user activity. Integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.


What in the World of Open Source?

Bill Pedersen

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been an effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, who review the monthly conference call notes, or who listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using the HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional, OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files for large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
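For readers less familiar with autoconf-style headers, here is a minimal, illustrative Python sketch of the general idea: turning a config.h.in template into a config.h whose package-specific corrections are deferred to config_vms.h. This is not the actual config_h.com (which is a DCL procedure); the known_defines values are invented for the example.

```python
import re

def generate_config_h(template_path, output_path, known_defines):
    """Toy illustration of config.h generation.

    known_defines maps autoconf symbols we can resolve on this platform
    (e.g. HAVE_UNISTD_H) to their values; everything else is left
    untouched so that config_vms.h can supply package-specific answers.
    """
    with open(template_path) as template, open(output_path, "w") as out:
        for line in template:
            match = re.match(r"#\s*undef\s+(\w+)", line)
            if match and match.group(1) in known_defines:
                # Resolve the symbols we know how to answer.
                out.write("#define %s %s\n" % (match.group(1),
                                               known_defines[match.group(1)]))
            else:
                out.write(line)
        # As described above, the generated file ends by pulling in the
        # hand-maintained, package-specific overrides.
        out.write('#include "config_vms.h"\n')

if __name__ == "__main__":
    generate_config_h("config.h.in", "config.h",
                      {"HAVE_UNISTD_H": "1", "STDC_HEADERS": "1"})
```

The design point is the same one the article makes: generate what can be determined mechanically, and isolate the hand-crafted platform knowledge in one small include file.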

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environments used by these and other packages.

There are also tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to see your help on the projects as well.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Integrating Data Protection Into Legacy Systems: Methods and Practices

Jason Paul Kazarian

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1


1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm transforming data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage; example traffic includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is (see the illustrative sketch after this list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
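To make these three properties concrete, here is a minimal, illustrative Python sketch. It is deliberately a toy: the keyed digit substitution below is not the FFX algorithm and offers no real security, but it behaves like FPE in the ways that matter for this discussion, producing an equivalent-format, referential, and reversible output. The key material and card number are invented for the example; a production system would use a vetted FFX implementation.

```python
import hmac
import hashlib

def _digit_permutation(key: bytes):
    """Derive a keyed permutation of the digits 0-9 (toy key schedule)."""
    ranks = sorted(range(10),
                   key=lambda d: hmac.new(key, bytes([d]), hashlib.sha256).digest())
    forward = {str(d): str(ranks.index(d)) for d in range(10)}
    inverse = {v: k for k, v in forward.items()}
    return forward, inverse

def protect(pan: str, key: bytes) -> str:
    """Digit string in, digit string of the same length out (format preserved)."""
    forward, _ = _digit_permutation(key)
    return "".join(forward[d] for d in pan)

def unprotect(token: str, key: bytes) -> str:
    """Inverse transform: recovers the original digit string."""
    _, inverse = _digit_permutation(key)
    return "".join(inverse[d] for d in token)

if __name__ == "__main__":
    key = b"demo-key"          # hypothetical key material
    pan = "4111111111111111"   # hypothetical 16-digit test card number
    token = protect(pan, key)
    assert len(token) == len(pan) and token.isdigit()  # equivalent format
    assert token == protect(pan, key)                   # referential (deterministic)
    assert unprotect(token, key) == pan                 # reversible
    print(pan, "->", token)
```

Because the transform is deterministic and format-preserving, a protected column still joins correctly against other tables keyed on the same value, which is exactly the property the article relies on for sharing anonymized data.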

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data
One obvious benefit of implementing a-priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data, via an algorithm having equivalent, referential, and reversible properties, enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction, and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


3 Reasons to Modernize Your SAP Environment

"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability)
Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape
Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction in IT infrastructure costs.

3. Access new functionality
A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story
RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open, every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.


Reinvent Your Business Printing With HP

Ashley Brogdon

Although printing is core to communication even in the digital age, it's not known for being a rapidly evolving technology. Printer models might change incrementally with each release, offering faster speeds, smaller footprints, or better security, but from the outside most printers appear to function fundamentally the same: click print, and your document slides onto a tray.

For years, business printing has primarily relied on two types of print technology: laser and inkjet. Both have proven to be reliable mainstays of the business printing environment, with HP LaserJet delivering high-volume, print shop-quality printing and HP OfficeJet Pro using inkjet printing for professional-quality prints at a low cost per page. Yet HP is always looking to advance printing technology to help lower costs, improve quality, and enhance how printing fits into a business's broader IT infrastructure.

On March 8, HP announced HP PageWide printers and MFPs, the next generation of a technology that is quickly reinventing the way businesses print. HP PageWide takes a proven, advanced commercial printing technology, previously used primarily in print shops and for graphic arts, and has scaled it to a new class of printers that offer professional-quality color printing with HP's lowest printing costs and fastest speeds yet. Businesses can now turn to three different technologies, laser, inkjet, and PageWide, to address their printing needs.

How HP PageWide Technology is different
To understand how HP PageWide Technology sets itself apart, it's best to first understand what it's setting itself apart from. At a basic level, laser printing uses a drum and static electricity to apply toner to paper as it rolls by. Inkjet printers place ink droplets on paper as the inkjet cartridge passes back and forth across a page.

HP PageWide Technology uses a completely different approach that features a stationary print bar that spans the entire width of a page and prints pages in a single pass. More than 40,000 tiny nozzles deliver four colors of Original HP pigment ink onto a moving sheet of paper. The printhead ejects each drop at a consistent weight, speed, and direction to place a correct-sized ink dot in the correct location. Because the paper moves instead of the printhead, the devices are dependable and offer breakthrough print speeds.

Additionally, HP PageWide Technology uses Original HP pigment inks, providing each print with high color saturation and dark, crisp text. Pigment inks deliver superb output quality, are rapid-drying, and resist fading, water, and highlighter smears on a broad range of papers.

How HP PageWide Technology fits into the office
HP's printer and MFP portfolio is designed to benefit businesses of all kinds and includes the world's most preferred printers. HP PageWide broadens the ways businesses can reinvent their printing with HP. Each type of printing, laser, inkjet, and now PageWide, can play an essential role and excel in the office in its own way.

HP LaserJet printers and MFPs have been the workhorses of business printing for decades, and our newest award-winning HP LaserJet printers use Original HP Toner cartridges with JetIntelligence. HP JetIntelligence makes it possible for our new line of HP LaserJet printers to print up to 40% faster, use up to 53% less energy, and have a 40% smaller footprint than previous generations.

With HP OfficeJet Pro, HP reinvented inkjet for enterprises to offer professional-quality color documents for up to 50% less cost per page than lasers. Now HP OfficeJet Pro printers can be found in small work groups and offices, helping provide big-business impact for a small-business price.

Ashley Brogdon is a member of HP Inc.'s Worldwide Print Marketing Team, responsible for awareness of HPI's business printing portfolio of products, solutions, and services for SMBs and Enterprises. Ashley has more than 17 years of high-tech marketing and management experience.


Now with HP PageWide, the HP portfolio bridges the printing needs between the small workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents, without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers; plus, they have the smallest carbon footprint among printers in their class, by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization
Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information, and help make the most of your IT investment.

Turning to HP
To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. An HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.

Delivering on the IoT Customer Experience

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE in Telco, Unified Communications, Alliances, and software development.

IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low-throughput network connectivity.

"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

1 Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015
2 The Internet of Things: Making Sense of the Next Mega-Trend, 2014, Goldman Sachs

Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers, for device management, service management, and charging
• Data processing layer operators, to acquire data, then verify and consolidate it and support it with analytics
• API (Application Programming Interface) management platform providers, to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and the associated operational costs, while being unable to scale or integrate these standalone solutions, or to evolve them to address other use cases or industries. As a result, these silos become inhibitors to growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle when the devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (both standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, and to integrate devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-, vertical-, and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization is the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform benefits: manage sensors and verticals, a data monetization chain, standards alignment, connectivity-agnostic operation, and new service offerings. © Hewlett Packard Enterprise 2016]

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies used to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
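
To make the device-facing side concrete, here is a minimal sketch of how a sensor might publish a reading over MQTT for a platform-side protocol connector to pick up. The broker host, topic layout, and payload fields are illustrative assumptions, not HPE-documented values.

```python
# Minimal sketch: a device publishing a JSON reading over MQTT.
# Broker address, topic layout, and payload fields are assumptions
# for illustration only; they are not HPE Universal IoT Platform values.
import json
import time

import paho.mqtt.publish as publish  # pip install paho-mqtt

reading = {
    "deviceId": "meter-0042",          # hypothetical device identifier
    "timestamp": int(time.time()),
    "kwh": 1.27,                       # example measurement
}

publish.single(
    topic="iot/meters/meter-0042/telemetry",  # hypothetical topic
    payload=json.dumps(reading),
    qos=1,
    hostname="mqtt.example.net",              # hypothetical broker / NIP connector
    port=1883,
)
```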

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, maintaining it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a rough sketch of such a request appears after the list below). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data unit transformation to application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
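
As a rough illustration of the north-bound side, the sketch below shows how an application could fetch the latest content of a container over a oneM2M-style HTTP binding. The base URL, resource path, originator ID, and header values are assumptions for illustration; the platform's interface documentation defines the actual resource tree and credentials.

```python
# Minimal sketch: an application reading the latest resource from a
# oneM2M-style HTTP interface. URL, resource path, and header values
# are illustrative assumptions, not documented HPE endpoints.
import requests  # pip install requests

CSE_BASE = "https://iot.example.net/onem2m"      # hypothetical CSE base URL
RESOURCE = "/meters/meter-0042/telemetry/la"     # 'la' = latest content instance

response = requests.get(
    CSE_BASE + RESOURCE,
    headers={
        "X-M2M-Origin": "C-analytics-app",  # originator (application) identifier
        "X-M2M-RI": "req-0001",             # request identifier
        "Accept": "application/json",
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```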

The DAV component is responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure, encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics or batch processing. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology to discover meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on complex-event processing), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).
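
As a small, hedged sketch of the descriptive side, the snippet below runs a daily KPI aggregation against a Vertica database using the vertica-python client. The connection settings and the telemetry table and column names are assumptions for illustration, not part of any documented platform schema.

```python
# Minimal sketch: a descriptive KPI query against Vertica.
# Connection details and the telemetry table/columns are assumptions.
import vertica_python  # pip install vertica-python

conn_info = {
    "host": "vertica.example.net",  # hypothetical host
    "port": 5433,
    "user": "analytics",
    "password": "change-me",
    "database": "iot",
}

query = """
    SELECT device_id,
           DATE_TRUNC('day', reading_ts) AS day,
           AVG(kwh) AS avg_kwh,
           COUNT(*) AS samples
    FROM telemetry                -- hypothetical table of device readings
    GROUP BY device_id, DATE_TRUNC('day', reading_ts)
    ORDER BY day DESC
    LIMIT 20
"""

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute(query)
    for device_id, day, avg_kwh, samples in cursor.fetchall():
        print(device_id, day, round(avg_kwh, 2), samples)
```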

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market of IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable the identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-, vertical-, and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
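
As a minimal sketch of that first step, the snippet below splits customers into first-time and repeat buyers and compares their average order value with pandas. The orders.csv file and its column names are assumptions about your own data export, used only for illustration.

```python
# Minimal sketch: segment first-time vs. repeat customers from an order log.
# The CSV file and column names (customer_id, order_id, amount) are
# illustrative assumptions about your data export.
import pandas as pd

orders = pd.read_csv("orders.csv")  # one row per order

per_customer = orders.groupby("customer_id").agg(
    orders=("order_id", "count"),
    total_spend=("amount", "sum"),
)
per_customer["segment"] = per_customer["orders"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

summary = per_customer.groupby("segment").agg(
    customers=("orders", "size"),
    avg_orders=("orders", "mean"),
    avg_spend=("total_spend", "mean"),
)
print(summary)
```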

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014".

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals.

The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global", sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances, XYPRO Technology

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
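
To illustrate what "deviation from normal" monitoring can look like in its simplest form, here is a hedged sketch that flags days where a user's privileged-command count jumps well above that user's own historical baseline. The event data and the 3-sigma threshold are illustrative assumptions; a real deployment would feed audit records from the system's security subsystem into proper analytics or SIEM tooling.

```python
# Minimal sketch: flag days where a user's privileged-command count
# exceeds their historical baseline (mean + 3 standard deviations of
# the other days). Input data and the threshold are illustrative.
from statistics import mean, stdev

# daily counts of privileged commands per user (hypothetical audit extract)
history = {
    "ops.jane": [12, 9, 14, 11, 10, 13, 12],
    "ops.raj":  [3, 2, 4, 3, 2, 95, 3],   # day 6 looks suspicious
}

def flag_anomalies(daily_counts, sigmas=3.0):
    flags = []
    for day, count in enumerate(daily_counts, start=1):
        others = daily_counts[:day - 1] + daily_counts[day:]
        baseline = mean(others)
        spread = stdev(others) or 1.0   # avoid a zero threshold
        if count > baseline + sigmas * spread:
            flags.append((day, count))
    return flags

for user, counts in history.items():
    for day, count in flag_anomalies(counts):
        print(f"ALERT: {user} ran {count} privileged commands on day {day}")
```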

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
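
On the SIEM-integration point, the sketch below shows one generic way an audit event could be forwarded to a collector as a CEF-formatted syslog message. The host, port, vendor/product strings, and field names are illustrative assumptions; this is not the XYGATE Merged Audit or ArcSight connector itself.

```python
# Minimal sketch: forward a security event to a SIEM collector as a
# CEF-formatted syslog message. Host, port, and CEF fields are
# illustrative assumptions, not a vendor-supplied connector.
import logging
import logging.handlers

collector = ("siem.example.net", 514)  # hypothetical syslog/SIEM collector

logger = logging.getLogger("audit-forwarder")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=collector))

def send_cef(signature_id, name, severity, **fields):
    """Build a CEF:0 record and send it via syslog."""
    extension = " ".join(f"{key}={value}" for key, value in fields.items())
    cef = (
        f"CEF:0|ExampleVendor|ExampleAudit|1.0|"
        f"{signature_id}|{name}|{severity}|{extension}"
    )
    logger.info(cef)

# Example: report a failed privileged logon attempt.
send_cef(
    "AUTH-004", "Failed privileged logon", 7,
    suser="super.ops", src="10.1.2.3", outcome="failure",
)
```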

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global", sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing and I realized then he was so young. It's just funny when you start getting to sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies, in different roles and titles, and out of all of them the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now, with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release had many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using the HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools: AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools: CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The porting effort by John Malmberg on cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files for large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that includes "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways the ability to easily port Open Source Software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are also tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain underprotected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.
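
As a rough illustration of why preserving data structure matters to legacy applications, the sketch below tokenizes a card-number-like value while keeping its length and digit format, using a simple lookup vault. This is only an illustrative stand-in for the concept; it is not the NIST FFX algorithm, and the keep-last-four field layout is an assumption.

```python
# Minimal sketch: vault-based, format-preserving tokenization of a
# 16-digit account number. This illustrates the concept only; it is
# NOT the NIST FFX (format-preserving encryption) algorithm.
import secrets

class TokenVault:
    """Maps sensitive values to random tokens of the same digit format."""

    def __init__(self):
        self._forward = {}   # original -> token
        self._reverse = {}   # token -> original

    def tokenize(self, pan: str) -> str:
        if pan in self._forward:
            return self._forward[pan]
        while True:
            # Preserve length and keep the last four digits readable,
            # an assumed requirement for legacy screens and reports.
            token = "".join(secrets.choice("0123456789")
                            for _ in range(len(pan) - 4)) + pan[-4:]
            if token not in self._reverse and token != pan:
                break
        self._forward[pan] = token
        self._reverse[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                      # same length and format, last 4 kept
print(vault.detokenize(token))    # authorized re-identification
```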

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: Legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: The original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: Legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: Legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: Legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported One key obstacle is vendors no longer provide resources for further development For example Apple Computer routinely stops updating systems after seven years8 It may become cost-prohibitive to modify a system if the manufacturer does provide any assistance Yet sensitive data stored on legacy systems must be protected as the datarsquos lifetime is usually much longer than any manufacturerrsquos support period

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection² responsibility, while the boundaries between layers designate potential exploits. Traditionally we define four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application [10].

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


² We use the term "protection" for any generic algorithm that transforms data from its original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Figure: the data protection stack. Layers from most specific to most general: Application, Database, Object, Storage. Example traffic at each layer: formatted data items (Database), files and directories (Object), disk blocks (Storage). Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.³ The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with that of another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
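To make the three properties concrete, here is a minimal Python sketch of a format-preserving transform over a 16-digit card number. The keyed digit-wise cipher below is a toy stand-in for illustration only: it is not the NIST FFX algorithm and is not cryptographically secure, and names such as protect_pan and the demo key are hypothetical.

import hashlib

def _digit_offsets(key: bytes, length: int):
    # Derive deterministic per-position digit offsets from the key (toy construction).
    digest = hashlib.sha256(key).digest()
    return [digest[i % len(digest)] % 10 for i in range(length)]

def protect_pan(pan: str, key: bytes) -> str:
    # Equivalent: the output is a digit string of the same length as the input.
    # Referential: the same input and key always yield the same output.
    return "".join(str((int(d) + o) % 10) for d, o in zip(pan, _digit_offsets(key, len(pan))))

def unprotect_pan(token: str, key: bytes) -> str:
    # Reversible: subtracting the same offsets recovers the original digits.
    return "".join(str((int(d) - o) % 10) for d, o in zip(token, _digit_offsets(key, len(token))))

if __name__ == "__main__":
    key = b"demo-key"                # hypothetical key material
    pan = "4111111111111111"         # well-known test card number
    token = protect_pan(pan, key)
    assert len(token) == len(pan) and token.isdigit()   # equivalent
    assert token == protect_pan(pan, key)                # referential
    assert unprotect_pan(token, key) == pan              # reversible
    print(pan, "->", token)

A production deployment would substitute a standardized FFX-mode FPE implementation for the toy cipher, but the equivalent, referential, and reversible properties asserted above are exactly the ones the article relies on.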

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

³ American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended, as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronics Engineers, 26 Feb. 2013. Web. 2 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. "Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information." Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 9 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 8 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 2 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 9 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business, avoiding loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story. RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being on the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open, every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.



Now with HP PageWide, the HP portfolio bridges the printing needs between the small-workgroup printing of HP OfficeJet Pro and the high-volume, pan-office printing of HP LaserJet. PageWide devices are ideal for workgroups of 5 to 15 users printing 2,000 to 7,500 pages per month who need professional-quality color documents without the wait. With HP PageWide, businesses get best-in-class print speeds and professional-quality color for the lowest total cost of ownership in its class.

HP PageWide printers also shine in the environmental arena. In part because there's no fuser element needed to print, PageWide devices use up to 84% less energy than in-class laser printers, plus they have the smallest carbon footprint among printers in their class, by a dramatic margin. And fewer consumable parts means there's less maintenance required and fewer replacements needed over the life of the printer.

Printing in your organization. Not every business has the same printing needs. Which printers you use depends on your business priorities and how your workforce approaches printing. Some need centrally located printers for many people to print everyday documents. Some have small workgroups who need dedicated, high-quality color printing. And some businesses need to also scan and fax documents. Business parameters such as cost, maintenance, size, security, and service needs also determine which printer is the right fit.

HP's portfolio is designed to benefit any business, no matter the size or need. We've taken into consideration all usage patterns and IT perspectives to make sure your printing fleet is the right match for your printing needs.

Within our portfolio we also offer a host of services and technologies to optimize how your fleet operates, improve security, and enhance data management and workflows throughout your business. HP Managed Print Services combines our innovative hardware, services, and solutions into one integrated approach. Working with you, we assess, deploy, and manage your imaging and printing system, tailoring it for where and when business happens.

You can also tap into our individual print solutions, such as HP JetAdvantage Solutions, which allows you to configure devices, conduct remote diagnostics, and monitor supplies from one central interface. HP JetAdvantage Security Solutions safeguard sensitive information as it moves through your business, help protect devices, data, and documents, and enforce printing policies across your organization. And HP JetAdvantage Workflow Solutions help employees easily capture, manage, and share information, and help make the most of your IT investment.

Turning to HP. To learn more about how to improve your printing environment, visit hp.com/go/businessprinters. You can explore the full range of HP's business printing portfolio, including HP PageWide, LaserJet, and OfficeJet Pro printers and MFPs, as well as HP's business printing solutions, services, and tools. And an HP representative or channel partner can always help you evaluate and assess your print fleet and find the right printers, MFPs, solutions, and services to help your business meet its goals. Continue to look for more business innovations from HP.

To learn more about specific claims, visit www.hp.com/go/pagewideclaims, www.hp.com/go/LJclaims, www.hp.com/go/learnaboutsupplies, and www.hp.com/go/printerspeeds.


IoT Evolution. Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units [1]. In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020 [2].

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of different connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions. Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve the efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.



Delivering on the IoT Customer Experience

1. Gartner, "Forecast: Internet of Things, Endpoints and Associated Services, Worldwide, 2015."
2. "The Internet of Things: Making Sense of the Next Mega-Trend," Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses within HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview. For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must also support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices, using both standards-based and proprietary communication, and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators, CSPs and enterprises alike, can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer), and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture. The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo, and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform over traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need be purchased, as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM). The DSM module is the nerve center of the HPE Universal IoT Platform; it manages the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform. Callouts: manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plans, subscriptions, device associations, and more) as well as IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP). The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies that onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
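To give a feel for the kind of south-bound traffic such protocol connectors normalize, here is a minimal sketch of a device publishing a meter reading over MQTT with the open-source paho-mqtt client. The broker address, topic scheme, and payload fields are hypothetical placeholders and are not part of the HPE platform's documented interface.

import json
import time

from paho.mqtt import publish  # pip install paho-mqtt

BROKER_HOST = "127.0.0.1"             # placeholder: replace with your MQTT broker
TOPIC = "meters/site-42/electricity"  # hypothetical topic naming scheme

def publish_reading(value_kwh: float) -> None:
    payload = json.dumps({
        "device_id": "meter-0001",    # hypothetical device identifier
        "timestamp": int(time.time()),
        "kwh": value_kwh,
    })
    # QoS 1: the broker acknowledges receipt, a common choice for metering data.
    publish.single(TOPIC, payload, qos=1, hostname=BROKER_HOST, port=1883)

if __name__ == "__main__":
    publish_reading(12.7)

A connector on the platform side would subscribe to such topics, validate and normalize the payload, and hand the result to the data acquisition layer.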

Data Acquisition and Verification (DAV). DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices and acquire IoT data, and maintains it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access control policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics or batch-processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
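To illustrate the north-bound side, the sketch below shows an application retrieving the latest content instance of a container over a oneM2M-style HTTP REST binding using the requests library. The base URL, resource path, originator, and request identifier are hypothetical, and real deployments differ in resource naming and security configuration; this is illustrative only, not the platform's documented API.

import requests  # pip install requests

CSE_BASE = "https://iot.example.com/onem2m"    # hypothetical CSE endpoint
RESOURCE = "/cse-base/meter-0001/readings/la"  # 'la' = latest content instance (hypothetical path)

def fetch_latest_reading() -> dict:
    headers = {
        "X-M2M-Origin": "C-analytics-app",  # originator identifier (hypothetical)
        "X-M2M-RI": "req-0001",             # request identifier
        "Accept": "application/json",
    }
    # Retrieve the resource representation from the platform's REST interface.
    resp = requests.get(CSE_BASE + RESOURCE, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_latest_reading())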

Data Analytics. The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, both batch and real-time (the latter based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).

Operations and Business Support Systems (OSS/BSS). The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform,' 'Unified Correlation Analyzer,' and 'Order Management.'

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services

• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC). The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success. The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS. If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE. If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING. Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED: Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
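As a minimal illustration of that first step, the sketch below uses pandas to separate first-time from repeat customers and compare their average order values; the CSV file and column names are hypothetical.

import pandas as pd

# Hypothetical export of order history: one row per order.
orders = pd.read_csv("orders.csv")  # columns assumed: customer_id, order_id, order_total

# Count orders per customer and compute each customer's average order value.
per_customer = orders.groupby("customer_id").agg(
    orders=("order_id", "count"),
    avg_order_value=("order_total", "mean"),
)
per_customer["segment"] = per_customer["orders"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare the two segments: how many customers, and how much they spend per order.
print(per_customer.groupby("segment").agg(
    customers=("orders", "size"),
    avg_order_value=("avg_order_value", "mean"),
))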

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE: Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER. One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS? CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and the costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development and Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
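
To make the SIEM-integration point concrete, here is a small illustrative sketch, not the actual XYGATE Merged Audit or ArcSight connector, that formats an audit event as an ArcSight-style CEF string and ships it over UDP syslog. The vendor, product, host, port, and field names are all assumptions for the example.

# Illustrative sketch only: format an audit event as a CEF string and send it
# to a SIEM collector over UDP syslog. Vendor/product/host/field names are
# assumptions, not the actual NonStop or XYGATE integration.
import socket
from datetime import datetime, timezone

def to_cef(event):
    # CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|Extension
    ext = " ".join(f"{k}={v}" for k, v in event["fields"].items())
    return (f"CEF:0|ExampleVendor|ExampleAudit|1.0|{event['id']}|"
            f"{event['name']}|{event['severity']}|{ext}")

def send_syslog(message, host="127.0.0.1", port=514):
    # 127.0.0.1 is a placeholder; point this at your SIEM collector.
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    payload = f"<134>{ts} nonstop-node {message}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload.encode("utf-8"), (host, port))

event = {"id": "AUTH_FAIL", "name": "Repeated logon failures", "severity": 7,
         "fields": {"suser": "OPERATOR.JOE", "cnt": 5, "src": "10.0.0.12"}}
send_syslog(to_cef(event))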

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI; access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.
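
For readers who want to sanity-check figures like these against their own environment, the underlying arithmetic is simple. The cost and benefit numbers below are invented placeholders, chosen only so the example reproduces the 23% figure quoted above; they are not from the Ponemon report.

# ROI = (annualized benefit - annualized cost) / annualized cost.
# The figures below are invented placeholders, not Ponemon data.
def roi(annual_benefit, annual_cost):
    return (annual_benefit - annual_cost) / annual_cost

siem_cost, siem_benefit = 400_000, 492_000       # e.g. breach losses avoided
print(f"SIEM ROI: {roi(siem_benefit, siem_cost):.0%}")   # -> 23%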

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kenneth.scudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf; back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here, where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require GNV to be installed in order to use them.

If you do install or upgrade GNV, then GNV must be installed first, and note that upgrading GNV using the HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make utility is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

The port by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
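
To illustrate the idea behind such a procedure, here is a rough Python sketch (not the actual DCL config_h.com, and the known-feature table is invented): each #undef line in config.h.in becomes a #define when the feature is known to be present, and the package-specific config_vms.h is appended at the end for the cases the generic procedure cannot decide.

# Conceptual sketch of config.h generation from config.h.in. This is NOT the
# actual config_h.com DCL procedure; the known-feature table is invented.
KNOWN = {"HAVE_UNISTD_H": "1", "HAVE_STRINGS_H": "1"}   # assumed probe results

def generate_config_h(template_lines):
    out = []
    for line in template_lines:
        stripped = line.strip()
        if stripped.startswith("#undef "):
            name = stripped.split()[1]
            if name in KNOWN:
                out.append(f"#define {name} {KNOWN[name]}")
            else:
                out.append(f"/* #undef {name} */")       # leave unknowns undefined
        else:
            out.append(line.rstrip("\n"))
    out.append('#include "config_vms.h"')                # package-specific overrides
    return "\n".join(out) + "\n"

template = ["/* sample config.h.in */\n", "#undef HAVE_UNISTD_H\n", "#undef HAVE_FOO\n"]
print(generate_config_h(template))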

In many ways, the ability to easily port Open Source Software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the ClamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of his POP3 server. He has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices, by Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access security and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application.10

At each layer, it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from top to bottom: Application, Database, Object, Storage. Example traffic between layers includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
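
To make the equivalent, referential, and reversible properties tangible, here is a toy Python sketch of a format-preserving transform over digit strings. It is a simplified Feistel construction written only for illustration; it is not the NIST FFX standard, the key is a placeholder, and it must not be used to protect real data.

# Toy format-preserving transform over digit strings, illustrating the
# equivalent / referential / reversible properties described above.
# Simplified Feistel construction for illustration only: NOT the NIST FFX
# standard and not suitable for protecting real data.
import hmac, hashlib

KEY = b"demo-key-only"     # placeholder key for the sketch
ROUNDS = 8

def _round(block, i, width):
    digest = hmac.new(KEY, f"{i}:{block}".encode(), hashlib.sha256).hexdigest()
    return int(digest, 16) % (10 ** width)

def protect(digits):
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in range(ROUNDS):
        m = u if i % 2 == 0 else len(digits) - u
        c = (int(a) + _round(b, i, m)) % (10 ** m)
        a, b = b, str(c).zfill(m)
    return a + b

def unprotect(digits):
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in reversed(range(ROUNDS)):
        m = u if i % 2 == 0 else len(digits) - u
        prev_b = a
        prev_a = (int(b) - _round(prev_b, i, m)) % (10 ** m)
        a, b = str(prev_a).zfill(m), prev_b
    return a + b

pan = "4111111111111111"
token = protect(pan)
assert len(token) == len(pan) and token.isdigit()   # equivalent
assert protect(pan) == token                        # referential (deterministic)
assert unprotect(token) == pan                      # reversible
print(pan, "->", token)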

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.
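
A brief sketch of why the referential property matters for this kind of sharing: because the same identifier always protects to the same value, independently de-identified data sets can still be joined by a third-party analytics team without anyone seeing the real identifiers. The protect() function below is a deterministic stand-in (a keyed hash mapped to digits), and the patient data is invented; any referential FPE-style transform would behave the same way for the join.

# Sketch: joining two anonymized data sets on a protected identifier.
# protect() is a deterministic stand-in, and the records are invented.
import hmac, hashlib

KEY = b"demo-key-only"

def protect(member_id):
    mac = hmac.new(KEY, member_id.encode(), hashlib.sha256).hexdigest()
    return str(int(mac, 16) % 10**9).zfill(9)       # 9-digit pseudonym

# Two departments de-identify their extracts independently...
visits = [{"pid": protect("MRN-1001"), "clinic": "cardiology"},
          {"pid": protect("MRN-1002"), "clinic": "endocrinology"}]
labs   = [{"pid": protect("MRN-1001"), "a1c": 9.1},
          {"pid": protect("MRN-1002"), "a1c": 5.4}]

# ...yet the analytics partner can still correlate records per (anonymous) patient.
by_patient = {v["pid"]: dict(v) for v in visits}
for row in labs:
    by_patient[row["pid"]]["a1c"] = row["a1c"]
print(by_patient)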

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

The program is now in its fifth year, and we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open, every time.

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?

Page 28: Connect Converge Spring 2016

25

26

IoT Evolution
Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring a number of connectivity options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions
Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity, both economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1 Gartner, Forecast: Internet of Things, Endpoints and Associated Services, Worldwide, 2015.
2 The Internet of Things: Making Sense of the Next Mega-Trend, Goldman Sachs, 2014.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton

27

Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.
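
As a purely conceptual sketch of that collect, validate, enrich, and expose flow (this is not the HPE Universal IoT Platform API; the stage names, fields, and thresholds are invented), the chain can be pictured as a small pipeline:

# Conceptual collect -> validate -> enrich -> expose pipeline. Stage names,
# fields, and thresholds are invented for illustration only.
def validate(reading):
    return reading.get("device_id") and -40 <= reading.get("temp_c", 999) <= 85

def enrich(reading, assets):
    meta = assets.get(reading["device_id"], {})
    return {**reading, "site": meta.get("site", "unknown")}

def expose(readings):
    # A real platform would publish via an API layer; here we just return a list.
    return [r for r in readings]

assets = {"meter-17": {"site": "Plant A"}}
raw = [{"device_id": "meter-17", "temp_c": 21.5},
       {"device_id": "", "temp_c": 400.0}]          # invalid reading is dropped
pipeline = expose(enrich(r, assets) for r in raw if validate(r))
print(pipeline)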

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage only a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or a fleet of cars. These solutions are typically limited to a specific device type, vertical, protocol, and set of business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support it with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases, and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview
For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases—each defined by an application and a device type from any industry—and manage the whole ecosystem from the time the application is onboarded until it's removed. In addition, the platform must support scalability and lifecycle management when devices become distributed by the millions over periods that could exceed 10 years.

Hewlett Packard Enterprise (HPE) Communications & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software—widely used in the communications industry—by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices—using standards-based and proprietary communication—and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, plus data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had previously been unobtainable.

With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities
• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings
• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators—CSPs and enterprises alike—can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric—as data and its monetization are the essence of the IoT business model—and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways/devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform – manage sensors and verticals, data monetization chain, standards alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C models.

With the DSM module, you can manage IoT applications—configuration, tariff plan, subscription, device association, and others—and IoT gateways and devices, including provisioning, configuration, and monitoring, and you can troubleshoot IoT devices.
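To make the idea of hierarchical accounts combined with role-based access control more concrete, here is a minimal, hypothetical sketch in Python. The account names, roles, and permission strings are invented for illustration and do not represent the actual DSM data model or API.

# Minimal RBAC sketch: hypothetical roles and a hierarchical account tree.
# Names and permissions are illustrative only, not the DSM interface.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

ROLE_PERMISSIONS: Dict[str, set] = {
    "operator_admin": {"provision_device", "configure_app", "view_telemetry"},
    "enterprise_admin": {"configure_app", "view_telemetry"},
    "end_customer": {"view_telemetry"},
}

@dataclass
class Account:
    name: str
    role: str
    parent: Optional["Account"] = None
    children: List["Account"] = field(default_factory=list)

    def add_child(self, child: "Account") -> "Account":
        child.parent = self
        self.children.append(child)
        return child

def is_allowed(account: Account, permission: str) -> bool:
    """An account may perform an action only if its role grants the permission."""
    return permission in ROLE_PERMISSIONS.get(account.role, set())

# B2B2C-style hierarchy: IoT operator -> enterprise customer -> consumer.
operator = Account("acme-iot-operator", "operator_admin")
utility = operator.add_child(Account("city-water-utility", "enterprise_admin"))
household = utility.add_child(Account("household-42", "end_customer"))

print(is_allowed(operator, "provision_device"))   # True
print(is_allowed(household, "provision_device"))  # False

The point of the sketch is only that each level of the hierarchy operates with a narrower role, which is how B2B, B2C, and B2B2C models can coexist on one multi-tenant platform.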

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP, you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs—volume, variety, and velocity—typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
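To make the protocol-connector idea concrete, the following is a minimal sketch of the kind of device-side traffic an MQTT connector might ingest. It uses the open-source paho-mqtt client; the broker host, topic layout, and payload fields are invented for illustration and are not part of the HPE platform.

# Hypothetical device publishing a sensor reading over MQTT.
# Broker address, topic, and payload schema are illustrative only.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "iot-broker.example.com"   # assumed broker endpoint
TOPIC = "meters/household-42/reading"    # assumed topic layout

client = mqtt.Client(client_id="meter-household-42")
client.connect(BROKER_HOST, port=1883, keepalive=60)

reading = {
    "device_id": "meter-household-42",
    "timestamp": int(time.time()),
    "litres": 12.7,
}

# QoS 1: the broker acknowledges receipt, a common choice for metering data.
client.publish(TOPIC, payload=json.dumps(reading), qos=1)
client.disconnect()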

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data, and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data unit transformation to application-specific protocol transformation, as defined by the rules
• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element
• Processing data and triggering actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices
• Management of device security keys for secure/encrypted communication
• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring strong performance even for distinctly different types of operations, such as transactional operations and analytics/batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity of the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
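As a rough illustration of how an application might consume such resources over the oneM2M-style REST exposure, here is a minimal sketch using Python's requests library. The host name, resource path, and originator identifier follow general oneM2M conventions but are assumptions for illustration, not the platform's documented API.

# Hypothetical retrieval of the latest content instance for a sensor container.
# Endpoint, resource path, and credentials are illustrative only.
import requests

BASE_URL = "https://iot-platform.example.com/onem2m"      # assumed CSE endpoint
RESOURCE = "/cse-base/meter-household-42/readings/la"     # 'la' = latest, per oneM2M convention

headers = {
    "X-M2M-Origin": "C-app-billing",   # originator (application) identifier
    "X-M2M-RI": "req-0001",            # request identifier
    "Accept": "application/json",
}

response = requests.get(BASE_URL + RESOURCE, headers=headers, timeout=10)
response.raise_for_status()

content_instance = response.json()
print(content_instance)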

Data Analytics
The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time—based on 'Complex-Event Processing'—for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive determination, and prescriptive recommendation.
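As a small, hypothetical example of the kind of descriptive analytics such a module might run, the following sketch queries a columnar store for a daily-consumption KPI using the open-source vertica-python client. The table, columns, and connection settings are invented for the example and do not describe the product's schema.

# Hypothetical daily-consumption KPI query against a columnar store.
# Table, columns, and connection settings are illustrative only.
import vertica_python

conn_info = {
    "host": "analytics.example.com",
    "port": 5433,
    "user": "iot_analyst",
    "password": "change-me",
    "database": "iot",
}

KPI_SQL = """
    SELECT device_id,
           DATE_TRUNC('day', reading_ts) AS day,
           SUM(litres) AS daily_litres
    FROM meter_readings
    WHERE reading_ts >= CURRENT_DATE - 30
    GROUP BY device_id, DATE_TRUNC('day', reading_ts)
    ORDER BY device_id, day
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(KPI_SQL)
    for device_id, day, daily_litres in cur.fetchall():
        print(device_id, day, daily_litres)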

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services
• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application—sitting on top of ubiquitous, securely managed connectivity and enabling identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.



WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
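As a small, hypothetical illustration of that first step, the sketch below uses pandas to split an order history into first-time and repeat customers and compare their purchasing behavior; the file name and column names are invented for the example.

# Hypothetical segmentation of first-time vs. repeat customers from order data.
# File and column names are illustrative only.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])  # columns: customer_id, order_date, amount

# Count orders per customer, then label each customer as first-time or repeat.
order_counts = orders.groupby("customer_id")["order_date"].count()
orders["segment"] = orders["customer_id"].map(
    lambda cid: "repeat" if order_counts[cid] > 1 else "first-time"
)

# Compare order counts, average order value, and total revenue by segment.
summary = orders.groupby("segment")["amount"].agg(["count", "mean", "sum"])
print(summary)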

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend—both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime—which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98–99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes—which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
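As a toy illustration of that second step, the following sketch compares audit events against a per-user command baseline and flags deviations. The event format, command names, and baseline contents are invented and far simpler than what a real SIEM such as HPE ArcSight does; it only shows the general idea of alerting on activity outside an established pattern.

# Illustrative baseline-deviation check over audit events.
# Event format and baseline contents are invented for the example.
from collections import defaultdict

# Baseline: commands each user is normally seen running (e.g., learned over 90 days).
baseline = {
    "ops.jane": {"FUP", "SCF", "SQLCI"},
    "dev.raj": {"TACL", "FUP"},
}

audit_events = [
    {"user": "ops.jane", "command": "SCF"},
    {"user": "dev.raj", "command": "PASSWORD"},   # not in dev.raj's baseline
    {"user": "contractor.x", "command": "FUP"},   # unknown user
]

alerts = defaultdict(list)
for event in audit_events:
    allowed = baseline.get(event["user"])
    if allowed is None or event["command"] not in allowed:
        alerts[event["user"]].append(event["command"])

for user, commands in alerts.items():
    print(f"ALERT: {user} ran unexpected commands: {commands}")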

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study—but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems—including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kenneth.scudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender—nothing serious—but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor
by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is that I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later, he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day, he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business, per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop.

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

New: Now audits 100% of all SQL/MX & SQL/MP user activity, integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years, the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release included many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012, a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves, we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers around the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently, it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are also tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2, which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way, he has also been responsible for ports of several general-purpose utilities, including the ClamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently, Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to see your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.
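To illustrate the idea of format-preserving protection at the application level, here is a minimal sketch using the third-party pyffx package, an open-source FFX implementation. The key material, field formats, and the choice of library are assumptions for illustration only, not the configuration of any product described in this article.

# Minimal format-preserving encryption sketch using the open-source pyffx package.
# Key material and field formats are illustrative only.
import pyffx

SECRET_KEY = b"example-key-material"  # placeholder; real keys come from key management

# A 16-digit account number is encrypted into another 16-digit number,
# so legacy column widths and validation rules continue to work.
account_cipher = pyffx.Integer(SECRET_KEY, length=16)
protected = account_cipher.encrypt(4111111111111111)
recovered = account_cipher.decrypt(protected)

print(protected)   # still 16 digits, but not the real account number
print(recovered)   # 4111111111111111

# Alphabetic fields can be protected the same way while preserving their format.
name_cipher = pyffx.String(SECRET_KEY, alphabet="abcdefghijklmnopqrstuvwxyz", length=6)
print(name_cipher.encrypt("miller"))

Because the ciphertext keeps the original length and character set, a legacy application can continue to store and move the protected value without schema changes, which is the property the FFX approach is valued for.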

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.[1] Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.[2]

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,[3] as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.[4] This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.[5] Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of the security breach statistics publicly published by Health and Human Services (HHS).[6]

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.*

Integrating Data Protection Into Legacy Systems: Methods and Practices, by Jason Paul Kazarian

1 This analysis excludes the Anthem, Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.
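As a rough illustration of how such a breakdown might be produced, the short Python sketch below tallies incidents by breach type and records exposed by year from a local CSV export of the HHS Office for Civil Rights breach portal. The file name and column names are assumptions made for illustration only, not a documented schema.

import csv
from collections import Counter

type_counts = Counter()
records_by_year = Counter()

with open("hhs_breach_report.csv", newline="") as f:    # hypothetical local export
    for row in csv.DictReader(f):
        type_counts[row["Type of Breach"]] += 1         # assumed column name
        year = row["Breach Submission Date"][-4:]       # assumes MM/DD/YYYY dates
        records_by_year[year] += int(row["Individuals Affected"] or 0)

total = sum(type_counts.values())
for breach_type, count in type_counts.most_common():
    print(f"{breach_type}: {100 * count / total:.0f}% of incidents")
for year in sorted(records_by_year):
    print(year, records_by_year[year], "records exposed")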

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


2 We use the term "protection" as a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers from most specific to most general: Application and Database (formatted data items), Object (files, directories), Storage (disk blocks). Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
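To make these three properties concrete, here is a deliberately simplified, non-cryptographic sketch in Python. The toy_protect function below is only a stand-in used to demonstrate the equivalent, referential, and reversible behavior described above; a real deployment would use a NIST FFX (FF1/FF3) implementation with properly managed keys, not this.

# Toy, NON-cryptographic stand-in for an FPE algorithm (illustration only).
def toy_protect(digits: str, key: int = 7) -> str:
    """Map a digit string to another digit string of the same length."""
    # Per-position shift; deterministic, so the same input always yields
    # the same output (referential), and length/character set are preserved
    # (equivalent).
    return "".join(str((int(d) + key * (i + 1)) % 10)
                   for i, d in enumerate(digits))

def toy_unprotect(digits: str, key: int = 7) -> str:
    """Inverse transform: recovers the original digit string (reversible)."""
    return "".join(str((int(d) - key * (i + 1)) % 10)
                   for i, d in enumerate(digits))

pan = "4111111111111111"                  # 16-digit test card number
token = toy_protect(pan)
assert len(token) == len(pan) and token.isdigit()   # equivalent
assert token == toy_protect(pan)                     # referential
assert toy_unprotect(token) == pan                   # reversible by the key holder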

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
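A minimal sketch of that idea, assuming a keyed, deterministic pseudonymization function (here a truncated HMAC rather than the FPE discussed above): because the same card number always maps to the same token, a third-party analyst can still join transaction and loyalty data sets without ever seeing a real card number. The data values and key are made up for illustration.

import hmac, hashlib

SECRET = b"analytics-sharing-key"          # held by the data owner only

def pseudonymize(card_number: str) -> str:
    # Deterministic: the same input always yields the same token,
    # so join keys still line up across shared data sets.
    return hmac.new(SECRET, card_number.encode(), hashlib.sha256).hexdigest()[:16]

transactions = [{"card": "4111111111111111", "amount": 42.50},
                {"card": "4111111111111111", "amount": 19.99}]
loyalty      = [{"card": "4111111111111111", "tier": "gold"}]

# De-identify both data sets before handing them to a third-party analyst.
tx_shared  = [{**t, "card": pseudonymize(t["card"])} for t in transactions]
loy_shared = [{**l, "card": pseudonymize(l["card"])} for l in loyalty]

# The analyst can still relate spend to loyalty tier without seeing a PAN.
tiers = {row["card"]: row["tier"] for row in loy_shared}
for t in tx_shared:
    print(t["card"], t["amount"], tiers.get(t["card"]))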

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web, 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web, 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web, 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web, 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web, 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web, 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web, 08 Oct 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web, 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web, 09 Oct 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web, 15 Oct 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web, 15 Oct 2015.


"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story. RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science together by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


IoT Evolution. Today it's almost impossible to read news about the tech industry without some reference to the Internet of Things (IoT). IoT is a natural evolution of machine-to-machine (M2M) technology and represents the interconnection of devices and management platforms that collectively enable the "smart world" around us. From wellness and health monitoring to smart utility meters, integrated logistics, and self-driving cars, the world of IoT is fast becoming a hyper-automated one.

The market for IoT devices and applications, and the new business processes they enable, is enormous. Gartner estimates endpoints of the IoT will grow at a 31.7% CAGR from 2013 through 2020, reaching an installed base of 20.8 billion units.1 In 2020, 6.6 billion "things" will ship, with about two-thirds of them consumer applications; hardware spending on networked endpoints will reach $3 trillion in 2020.2

In some instances, IoT may simply involve devices connected via an enterprise's own network, such as a Wi-Fi mesh across one or more factories. In the vast majority of cases, however, an enterprise's IoT network extends to devices connected in many disparate areas, requiring connectivity over a number of different options. For example, an aircraft in flight may provide feedback sensor information via satellite communication, whereas the same aircraft may use an airport's Wi-Fi access while at the departure gate. Equally, where devices cannot be connected to any power source, a low-powered, low-throughput connectivity option such as Sigfox or LoRa is needed.

The evolutionary trajectory, from limited-capability M2M services to the super-capable IoT ecosystem, has opened up new dimensions and opportunities for traditional communications infrastructure providers and industry-specific innovators. Those who exploit the potential of this technology to introduce new services and business models may be able to deliver unprecedented levels of experience for existing services and, in many cases, transform their internal operations to match the needs of a hyper-connected world.

Next-Generation IoT Solutions. Given the requirement for connectivity, many see IoT as a natural fit in the communications service providers' (CSPs) domain, such as mobile network operators, although connectivity is a readily available commodity. In addition, some IoT use cases are introducing different requirements on connectivity: economic (lower average revenue per user) and technical (low-power consumption, limited traffic, mobility, or bandwidth), which means a new type of connectivity option is required to improve efficiency and return on investment (ROI) of such use cases, for example low throughput network connectivity.


"The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services."

Delivering on the IoT Customer Experience

1 Gartner, Forecast: Internet of Things - Endpoints and Associated Services, Worldwide, 2015.
2 The Internet of Things: Making Sense of the Next Mega-Trend, 2014, Goldman Sachs.

Nigel Upton, Worldwide Director & General Manager, IoT/GCP, Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise

Nigel returned to HPE after spending three years in software startups developing big data analytical solutions for multiple industries, with a focus on mobility and drones. Nigel has led multiple businesses with HPE in Telco, Unified Communications, Alliances, and software development.

Nigel Upton


Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers
• Systems integrators for M2M/IoT services and industry-specific applications
• Managed ICT infrastructure providers
• Management platform providers for device management, service management, and charging
• Data processing layer operators to acquire data, then verify, consolidate, and support with analytics
• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview. For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle when the devices become distributed by millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communication & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, and data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value, with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (Business-to-Business), B2C (Business-to-Consumer), and B2B2C (Business-to-Business-to-Consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture. The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. This supports access to different south-bound networks and technologies, and various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization is the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules:

Device and Service Management (DSM). The DSM module is the nerve center of the HPE Universal IoT Platform, managing the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Diagram: HPE Universal IoT Platform - manage sensors and verticals, data monetization chain, standard alignment, connectivity agnostic, new service offerings. © Copyright Hewlett Packard Enterprise 2016]


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C models.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP). The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices, and communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
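For a sense of what one of these south-bound protocols looks like in practice, the sketch below publishes a meter reading over MQTT using the common paho-mqtt client library. The broker address, topic, and payload shape are illustrative assumptions, not values defined by the platform.

import json
import paho.mqtt.client as mqtt

# Connect to a (hypothetical) MQTT broker as a device-side client.
client = mqtt.Client(client_id="meter-0042")
client.connect("mqtt.example.com", 1883)

# Publish one reading; a protocol connector would bridge messages like this
# into the platform's uniform resource model.
reading = {"deviceId": "meter-0042", "kWh": 12.7, "ts": "2016-04-01T10:15:00Z"}
client.publish("meters/meter-0042/readings", json.dumps(reading), qos=1)
client.disconnect()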

Data Acquisition and Verification (DAV). DAV supports secure, bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact with and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications in turn can discover, access, and consume these resources on the north-bound side using the oneM2M-compliant HTTP REST interface (a minimal retrieval sketch appears at the end of this section). The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure/encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.
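To make the north-bound side concrete, here is a minimal sketch, assuming a oneM2M-style HTTP binding, of how an application might retrieve the latest reading reported by a device. The host, CSE base path, resource names, and originator ID are hypothetical placeholders, not documented HPE Universal IoT Platform endpoints.

import requests

CSE_BASE = "https://iot-platform.example.com/onem2m"   # hypothetical CSE base
HEADERS = {
    "X-M2M-Origin": "C-demo-application",  # originator (application) identifier
    "X-M2M-RI": "req-0001",                # unique request identifier
    "Accept": "application/json",
}

# 'la' is the oneM2M "latest" virtual resource of a container, i.e. the most
# recent contentInstance reported for this (hypothetical) device container.
resp = requests.get(f"{CSE_BASE}/meter-0042/readings/la", headers=HEADERS, timeout=10)
resp.raise_for_status()

cin = resp.json()["m2m:cin"]               # contentInstance representation
print("latest reading:", cin["con"], "created:", cin["ct"])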

Data Analytics. The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicator, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).

Operations and Business Support Systems (OSS/BSS). The OSS/BSS module provides a consolidated end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The OSS/BSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.


Data Service Cloud (DSC). The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashup for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success. The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make to a business. This early adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy turning previously ignored data into business gold

QUICK GUIDE TO INCREASING PROFITS WITH

BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers, as in the brief sketch below. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
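A minimal sketch of that first step, assuming nothing more than a list of orders exported from a point-of-sale or e-commerce system (the records below are made up for illustration):

from collections import Counter

orders = [
    {"customer": "c-101", "total": 40.00},
    {"customer": "c-102", "total": 25.00},
    {"customer": "c-101", "total": 55.00},
    {"customer": "c-103", "total": 15.00},
]

# Split customers into first-time vs. repeat buyers by order count.
order_counts = Counter(o["customer"] for o in orders)
first_time = {c for c, n in order_counts.items() if n == 1}
repeat = {c for c, n in order_counts.items() if n > 1}

def avg_order(customers):
    spend = [o["total"] for o in orders if o["customer"] in customers]
    return sum(spend) / len(spend) if spend else 0.0

print("first-time customers:", len(first_time), "avg order:", avg_order(first_time))
print("repeat customers:", len(repeat), "avg order:", avg_order(repeat))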

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up the IT team for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, and easy to manage and deploy, with low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23 percent ROI and encryption technologies had a 21 percent ROI. Access governance had a 13 percent ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring me as a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was, and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing this is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day, he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New: Now audits 100% of all SQL/MX and MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality, as well as the quantity, of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation, which is of a similar nature, is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years, the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012, a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers for the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently, it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
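
To illustrate the general idea (the real config_h.com is a DCL procedure on OpenVMS, so the file names and the feature table below are hypothetical, not the actual script), a rough Python sketch of this style of header generation might look like this: answer the feature tests the generator can derive on its own, leave the rest alone, and defer package-specific corrections to a hand-maintained config_vms.h appended at the end.

    # Illustrative sketch only -- the actual config_h.com is a DCL procedure;
    # the feature answers and file names here are hypothetical.

    KNOWN_FEATURES = {
        "HAVE_UNISTD_H": "1",   # answers the generator can derive on its own
        "HAVE_STRING_H": "1",
    }

    def generate_config_h(template="config.h.in", output="config.h"):
        """Turn config.h.in into config.h, deferring unknowns to config_vms.h."""
        result = []
        with open(template) as src:
            for line in src:
                stripped = line.strip()
                if stripped.startswith("#undef "):
                    macro = stripped.split()[1]
                    if macro in KNOWN_FEATURES:
                        result.append(f"#define {macro} {KNOWN_FEATURES[macro]}\n")
                        continue
                result.append(line)
        # Package-specific overrides the generator cannot derive go at the end.
        result.append('#include "config_vms.h"\n')
        with open(output, "w") as dst:
            dst.writelines(result)

    if __name__ == "__main__":
        generate_config_h()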

In many ways, the ability to easily port Open Source Software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and has had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way, he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently, Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in open source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical open source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system, even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices, by Jason Paul Kazarian

1 This analysis excludes the Anthem, Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31 percent are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24 percent of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer, it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Figure: the data protection stack. Layers from most specific to most general: Application (formatted data items), Database, Object (files, directories), and Storage (disk blocks). Flow arrows represent transport of clear data between layers via a secure tunnel; descriptions give example traffic at each layer.]


• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20 percent of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
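
To make the three properties concrete, the following Python sketch uses a toy keyed Feistel construction over digit strings. It is not the NIST FFX/FF1 algorithm described in the standard and is not suitable for production use; the key and card number shown are placeholders.

    import hmac
    import hashlib

    def _prf(key: bytes, round_no: int, half: str, out_len: int) -> int:
        """Keyed pseudo-random round function (HMAC-SHA256, reduced to out_len digits)."""
        digest = hmac.new(key, f"{round_no}:{half}".encode(), hashlib.sha256).hexdigest()
        return int(digest, 16) % (10 ** out_len)

    def fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
        """Map a digit string to another digit string of the same length (toy FPE)."""
        mid = len(digits) // 2
        a, b = digits[:mid], digits[mid:]
        for r in range(rounds):
            if r % 2 == 0:   # even rounds modify the left half using the right half
                a = str((int(a) + _prf(key, r, b, len(a))) % 10 ** len(a)).zfill(len(a))
            else:            # odd rounds modify the right half using the left half
                b = str((int(b) + _prf(key, r, a, len(b))) % 10 ** len(b)).zfill(len(b))
        return a + b

    def fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
        """Invert fpe_encrypt by undoing the rounds in reverse order."""
        mid = len(digits) // 2
        a, b = digits[:mid], digits[mid:]
        for r in reversed(range(rounds)):
            if r % 2 == 0:
                a = str((int(a) - _prf(key, r, b, len(a))) % 10 ** len(a)).zfill(len(a))
            else:
                b = str((int(b) - _prf(key, r, a, len(b))) % 10 ** len(b)).zfill(len(b))
        return a + b

    key = b"demo-secret-key"           # placeholder key
    pan = "4111111111111111"           # placeholder card number
    token = fpe_encrypt(key, pan)
    assert len(token) == len(pan) and token.isdigit()   # equivalent: same format
    assert fpe_encrypt(key, pan) == token                # referential: deterministic
    assert fpe_decrypt(key, token) == pan                # reversible: round-trips

Under these assumptions the token keeps the card number's length and character set, so it can flow through the legacy system unchanged, while only the component holding the key can reverse it.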

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter. Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum. Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security. Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money. NBC News, 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT. HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability, 99.999 percent, is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18 percent software licensing cost reduction and up to a 55 percent reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. She may be reached at dianacortes@hp.com.

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you.


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open, every time.

You wouldn't jump out of an airplane unless you knew your parachute worked, would you?



Value creation is no longer based on connecting devices and having them available. The focus now is on collecting data, validating it, enriching it with analytics, mixing it with other sources, and then exposing it to the applications that enable enterprises to derive business value from these services.

While there are already many M2M solutions in use across the market, these are often "silo" solutions, able to manage a limited level of interaction between the connected devices and central systems. An example would be simply collecting usage data from a utility meter or fleet of cars. These solutions are typically limited in terms of specific device type, vertical, protocol, and business processes.

In a fragmented ecosystem, close collaboration among participants is required to conceive and deliver a service that connects the data monetization components, including:

• Smart device and sensor manufacturers

• Systems integrators for M2M/IoT services and industry-specific applications

• Managed ICT infrastructure providers

• Management platform providers for device management, service management, and charging

• Data processing layer operators to acquire data, then verify, consolidate, and support with analytics

• API (Application Programming Interface) management platform providers to expose status and data to applications, with partner relationship management (PRM), Market Place, and Application Studio

With the silo approach, integration must be redone for each and every use case. IoT operators are saddled with multiple IoT silos and associated operational costs, while being unable to scale or integrate these standalone solutions or evolve them to address other use cases or industries. As a result, these silos become inhibitors for growth, as the majority of the value lies in streamlining a complete value chain to monetize data from sensor to application. This creates added value and related margins to achieve the desired business cases and therefore fuels investment in IoT-related projects. It also requires the high level of flexibility, scalability, cost efficiency, and versatility that a next-generation IoT platform can offer.

HPE Universal IoT Platform Overview: For CSPs and enterprises to become IoT operators and monetize the value of IoT, a need exists for a horizontal platform. Such a platform must be able to easily onboard new use cases, each defined by an application and a device type from any industry, and manage a whole ecosystem from the time the application is on-boarded until it's removed. In addition, the platform must also support scalability and lifecycle management when the devices become distributed by the millions over periods that could exceed 10 years. Hewlett Packard Enterprise (HPE) Communication & Media Solutions (CMS) developed the HPE Universal IoT Platform specifically to address long-term IoT requirements. At the heart, this platform adapts HPE CMS's own carrier-grade telco software, widely used in the communications industry, by adding specific intellectual property to deal with unique IoT requirements. The platform also leverages HPE offerings such as cloud, big data, and analytics applications, which include virtual private cloud and Vertica.

The HPE Universal IoT Platform enables connection and information exchange between heterogeneous IoT devices (standards-based and proprietary communication) and IoT applications. In doing so, it reduces dependency on legacy silo solutions and dramatically simplifies integrating diverse devices with different device communication protocols. The HPE Universal IoT Platform can be deployed, for example, to integrate with the HPE Aruba Networks WLAN (wireless local area network) solution to manage mobile devices and the data they produce within the range of that network, while also integrating devices connected by other Wi-Fi, fixed, or mobile networks. These include GPRS (2G and 3G), LTE 4G, and "Low Throughput Networks" such as LoRa.

On top of ubiquitous connectivity, the HPE Universal IoT Platform provides federation for device and service management, plus data acquisition and exposure to applications. Using our platform, clients such as public utilities, home automation, insurance, healthcare, national regulators, municipalities, and numerous others can realize tremendous benefits from consolidating data that had been previously unobtainable. With the HPE Universal IoT Platform, you can truly build for and capture new value from the proliferation of connected devices and benefit from:

• New revenue streams when launching new service offerings for consumers, industries, and municipalities

• Faster time-to-value with accelerated deployment from HPE partners' devices and applications for selected vertical offerings

• Lower total cost of ownership (TCO) to introduce new services with limited investment, plus the flexibility of HPE options (including cloud-based offerings) and the ability to mitigate risk

By embracing new HPE IoT capabilities, services, and solutions, IoT operators (CSPs and enterprises alike) can deliver a standardized end-to-end platform and create new services in the industries of their B2B (business-to-business), B2C (business-to-consumer), and B2B2C (business-to-business-to-consumer) customers to derive new value from data.

HPE Universal IoT Platform Architecture
The HPE Universal IoT Platform architecture is aligned with the oneM2M industry standard and designed to be industry-vertical and vendor-agnostic. It supports access to different south-bound networks and technologies, and to various applications and processes from diverse application providers across multiple verticals on the north-bound side. The HPE Universal IoT Platform enables industry-specific use cases to be supported on the same horizontal platform.

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, widely adopted or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by a device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of the Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform compared with traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises or in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end remote device management, including device discovery, configuration and software management. The HPE Universal IoT Platform facilitates control points on data, so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity-agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data-centric, as data and its monetization are the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such: only the required core modules need to be purchased, as licenses or as-a-Service, with the option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM)
The DSM module is the nerve center of the HPE Universal IoT Platform; it manages the end-to-end lifecycle of the IoT service and the associated gateways, devices and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform highlights: manage sensors across verticals, data monetization chain, standards alignment, connectivity-agnostic, new service offerings. © Hewlett Packard Enterprise 2016]

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models, such as B2B, B2C and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plans, subscriptions, device associations and more) and IoT gateways and devices, including provisioning, configuration and monitoring, and you can troubleshoot IoT devices.

Network Interworking Proxy (NIP)
The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform, oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers and proxies used to onboard new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST and others.
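To make the device side concrete, here is a minimal sketch of a sensor publishing a reading over MQTT, one of the protocols listed above. It assumes the third-party paho-mqtt package (1.x API); the broker host, topic layout and payload fields are illustrative assumptions rather than HPE Universal IoT Platform specifics.

# Minimal device-side MQTT publish sketch (assumes the paho-mqtt package, 1.x API).
# Broker host, topic naming and payload schema are illustrative assumptions.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "iot-gateway.example.com"   # hypothetical broker/NIP ingress endpoint
TOPIC = "meters/meter-0042/readings"      # hypothetical topic layout

client = mqtt.Client(client_id="meter-0042")
client.connect(BROKER_HOST, port=1883, keepalive=60)

reading = {
    "deviceId": "meter-0042",
    "timestamp": int(time.time()),
    "kwh": 12.7,
}

# QoS 1: the broker acknowledges receipt before the device treats the message as delivered.
client.publish(TOPIC, payload=json.dumps(reading), qos=1)
client.disconnect()

A protocol connector built through the 'Protocol Factory' would consume such traffic and map it into the platform's uniform resource model; the topic and payload shown here are only placeholders for whatever a given device vendor actually emits.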

Data Acquisition and Verification (DAV)
DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with devices, acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface (a retrieval sketch follows the list below). The DAV component is also responsible for transformation, validation and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation and data-unit transformation to application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element

• Processing data and triggering actions based on the type of message, such as alarm processing and complex-event processing
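Because the north-bound exposure is a oneM2M-compliant HTTP REST interface, an application retrieving the most recent reading for a device might look like the following sketch. It assumes the requests package; the CSE base path, resource names and originator credential are made-up values for illustration, not the platform's actual configuration.

# Minimal north-bound retrieval sketch using a oneM2M-style HTTP binding
# (assumes the requests package). Endpoint, resource path and originator are
# illustrative assumptions.
import requests

CSE_BASE = "https://iot-platform.example.com/onem2m/cse-base"   # hypothetical CSE root
url = CSE_BASE + "/meter-0042/readings/la"   # 'la' (latest) addresses the newest contentInstance

headers = {
    "X-M2M-Origin": "C-analytics-app",   # hypothetical registered application entity
    "X-M2M-RI": "req-0001",              # caller-chosen request identifier
    "Accept": "application/json",
}

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()
latest = resp.json()["m2m:cin"]["con"]   # 'con' carries the device payload
print(latest)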

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and support for data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access control policies that manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics or batch-processing operations. The columnar database, used in conjunction with distributed file system-based storage, provides extended longevity for the stored data at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics
The Data Analytics module leverages HPE Vertica technology for the discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing them with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media and geo-fencing), predictive (determination) and prescriptive (recommendation).
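Since the analytics layer builds on Vertica, a simple descriptive query of device data might be run as in the sketch below. It assumes the vertica_python client package; the connection details, table and column names are illustrative assumptions, not the module's actual schema or API.

# Hourly average consumption per device over the last day: a simple descriptive,
# KPI-style query (assumes the vertica_python package; schema is hypothetical).
import vertica_python

conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "analytics",
    "password": "example-password",   # placeholder credential
    "database": "iot",
}

query = """
    SELECT device_id,
           DATE_TRUNC('hour', reading_ts) AS hour,
           AVG(kwh) AS avg_kwh
    FROM meter_readings
    WHERE reading_ts > CURRENT_TIMESTAMP - INTERVAL '1 day'
    GROUP BY device_id, DATE_TRUNC('hour', reading_ts)
    ORDER BY device_id, hour
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(query)
    for device_id, hour, avg_kwh in cur.fetchall():
        print(device_id, hour, avg_kwh)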

Operations and Business Support Systems (OSS/BSS)
The BSS/OSS module provides a consolidated end-to-end view of devices, gateways and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer' and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services

• Automation: reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures and protecting vital revenue streams.


Data Service Cloud (DSC)
The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

The Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market of IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success
The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application. That infrastructure must sit on top of ubiquitous, securely managed connectivity and enable the identification, development and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical and client-agnostic solution with high scalability, modularity and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen

BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementing business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."
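To make the segmentation step concrete, here is a small sketch that splits first-time buyers from repeat customers and compares their spend. It assumes the pandas package and a hypothetical orders.csv file with customer_id and order_total columns; adapt the names to whatever your own systems export.

# Segment first-time vs. repeat customers from an order export (assumes pandas;
# the file name and column names are illustrative assumptions).
import pandas as pd

orders = pd.read_csv("orders.csv")

# Count orders and total spend per customer.
per_customer = orders.groupby("customer_id").agg(
    order_count=("order_total", "count"),
    total_spend=("order_total", "sum"),
)

# Label each customer and compare average spend across the two segments.
per_customer["segment"] = per_customer["order_count"].apply(
    lambda n: "first-time" if n == 1 else "repeat"
)
print(per_customer.groupby("segment")["total_spend"].mean())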

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: To make better decisions using customer data, you need to make sure your servers, networking and storage offer the performance, scale and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014".

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for staff authentication and for guest access for patients, their families and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance meeting the needs of staff, patients and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global", sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems
The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime
Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39 percent and 35 percent of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances, XYPRO Technology

Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE) and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident
The report found that 98-99 percent of the companies experienced attacks from viruses, worms, Trojans and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35 percent of companies had experienced malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent those measures and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviations from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking
Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50 percent of companies in the study had implemented access governance tools, and fewer than 45 percent had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
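As a generic illustration of the SIEM integration mentioned above (and not a depiction of how XYGATE or ArcSight actually move events), the sketch below forwards an audit event over syslog in CEF-style formatting using only the Python standard library; the host, port and field values are assumptions.

# Forward an audit event to a SIEM collector over syslog in CEF-style format.
# Host, port and event fields are illustrative assumptions.
import logging
import logging.handlers

SIEM_HOST = "siem.example.com"   # hypothetical SIEM/ArcSight syslog collector
SIEM_PORT = 514

handler = logging.handlers.SysLogHandler(address=(SIEM_HOST, SIEM_PORT))
logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# CEF header: version|vendor|product|product version|signature id|name|severity|extensions
event = (
    "CEF:0|ExampleCo|NonStopAudit|1.0|100|Privileged logon|5|"
    "suser=OPERATOR.ADMIN src=10.0.0.12 msg=Super-user logon outside change window"
)
logger.info(event)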

Security solutions have strong ROI
While it's dismaying to see that so few companies had deployed important security solutions, there is good news: the report shows that implementing those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23 percent ROI and encryption technologies a 21 percent ROI. Access governance had a 13 percent ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kenneth.scudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global", sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me, hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then how young he was. It's just funny when you get into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side willing to invest the time, energy and care to really be effective as a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style, is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall, with his family, friends and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have mentored people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job of servicing our customers, who are always the first priority. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done in his career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?", even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face

An integrated SQL database manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

New: Now audits 100% of all SQL/MX and MP user activity. Integrated with XYGATE Merged Audit.

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been an effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first. Note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


ar_tools and ld_tools are wrappers around the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files for large searches.

The Bash, Coreutils, Gawk, Grep, Sed and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the parts that config_h.com does not know how to get correct for a specific package, before running config_h.com.

config_h.com generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc., and updated ports, in progress, of Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environments these and other packages use.

There are also tools outside the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups and Stromasys Inc., and has had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach, reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31 percent are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24 percent of softcopy breach sources were shared resources, for example emails, electronic medical records or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing their originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


2 We use the term "protection" as a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Figure: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic between layers: formatted data items, files and directories, disk blocks. Flow represents transport of clear data between layers via a secure tunnel; the description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
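To make the equivalent, referential, and reversible properties concrete, here is a minimal Python sketch. It is not the NIST FFX standard (which a production system should use); it is a toy alternating-Feistel construction over digit strings, and the key and round count are placeholders chosen purely for illustration.

import hmac, hashlib

KEY = b"demo-key"   # placeholder key for illustration; a real deployment derives and manages keys securely
ROUNDS = 10

def _prf(round_no, value, modulus):
    # Keyed pseudo-random round function, reduced into the target modulus
    digest = hmac.new(KEY, f"{round_no}:{value}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % modulus

def protect(digits):
    # Maps a digit string (length >= 2, e.g. a 15- or 16-digit PAN) to another digit string of the same length
    u = len(digits) // 2
    mod_l, mod_r = 10 ** u, 10 ** (len(digits) - u)
    left, right = int(digits[:u]), int(digits[u:])
    for i in range(ROUNDS):
        if i % 2 == 0:
            left = (left + _prf(i, right, mod_l)) % mod_l
        else:
            right = (right + _prf(i, left, mod_r)) % mod_r
    return str(left).zfill(u) + str(right).zfill(len(digits) - u)

def unprotect(digits):
    # Inverts protect() by running the rounds in reverse and subtracting
    u = len(digits) // 2
    mod_l, mod_r = 10 ** u, 10 ** (len(digits) - u)
    left, right = int(digits[:u]), int(digits[u:])
    for i in reversed(range(ROUNDS)):
        if i % 2 == 0:
            left = (left - _prf(i, right, mod_l)) % mod_l
        else:
            right = (right - _prf(i, left, mod_r)) % mod_r
    return str(left).zfill(u) + str(right).zfill(len(digits) - u)

pan = "4111111111111111"                               # a well-known test card number
token = protect(pan)
assert len(token) == len(pan) and token.isdigit()      # equivalent: same length and character set
assert protect(pan) == token                           # referential: the same input always yields the same output
assert unprotect(token) == pan                         # reversible: only code holding the key can recover the original

Running the three assertions shows how a token keeps the shape of a card number, stays stable across records, and can be reversed only by the component holding the key.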

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11 Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct 2015.
15 Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business, to avoid loss of customers and revenue.

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

Page 31: Connect Converge Spring 2016

28

IoT Platform enables industry-specific use cases to be supported on the same horizontal platform

HPE enables IoT operators to build and capture new value from the proliferation of connected devices. Given its carrier-grade telco applications heritage, the solution is highly scalable and versatile. For example, platform components are already deployed to manage data from millions of electricity meters in Tokyo and are being used by over 170 telcos globally to manage data acquisition and verification from telco networks and applications.

Alignment with the oneM2M standard and data model means there are already hundreds of use cases covering more than a dozen key verticals. These are natively supported by the HPE Universal IoT Platform when standards-based, largely adopted, or industry-vertical protocols are used by the connected devices to provide data. Where the protocol used by the device is not currently supported by the HPE Universal IoT Platform, it can be seamlessly added. This is a benefit of Network Interworking Proxy (NIP) technology, which facilitates rapid development and deployment of new protocol connectors, dramatically improving the agility of the HPE Universal IoT Platform against traditional platforms.

The HPE Universal IoT Platform provides agnostic support for smart ecosystems, which can be deployed on premises and also in any cloud environment for a comprehensive as-a-Service model.

HPE equips IoT operators with end-to-end device remote management, including device discovery, configuration, and software management. The HPE Universal IoT Platform facilitates control points on data so you can remotely manage millions of IoT devices for smart applications on the same multi-tenant platform.

Additionally, it's device vendor-independent and connectivity agnostic. The solution operates at a low TCO (total cost of ownership) with high scalability and flexibility when combining the built-in data model with oneM2M standards. It also has security built directly into the platform's foundation, enabling end-to-end protection throughout the data lifecycle.

The HPE Universal IoT Platform is fundamentally built to be data centric, as data and its monetization is the essence of the IoT business model, and is engineered to support millions of connections with heterogeneous devices. It is modular and can be deployed as such, where only the required core modules are purchased as licenses or as-a-Service, with an option to add advanced modules as required. The HPE Universal IoT Platform is composed of the following key modules.

Device and Service Management (DSM). The DSM module is the nerve center of the HPE Universal IoT Platform, which manages the end-to-end lifecycle of the IoT service and associated gateways, devices, and sensors. It provides a web-based GUI for stakeholders to interact with the platform.

[Figure: HPE Universal IoT Platform, showing managed sensors and verticals, the data monetization chain, standards alignment, connectivity-agnostic support, and new service offerings. © Copyright Hewlett Packard Enterprise 2016]

29

Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C, and B2B2C models.
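As a rough illustration of how hierarchical accounts and role-based access can combine to gate access to devices, here is a small Python sketch. The account names, roles, and hierarchy below are invented for this example and do not reflect the DSM module's actual data model or APIs.

# child -> parent tenant hierarchy, B2B2C style (all names are hypothetical)
ACCOUNT_PARENT = {
    "city-utility": "operator",
    "household-42": "city-utility",
}

# what each role may do (illustrative roles only)
ROLE_PERMISSIONS = {
    "admin": {"read", "configure"},
    "analyst": {"read"},
}

def in_scope(user_account, device_account):
    # A user may see devices owned by their own account or any descendant of it
    account = device_account
    while account is not None:
        if account == user_account:
            return True
        account = ACCOUNT_PARENT.get(account)
    return False

def allowed(user_account, role, device_account, action):
    # Both the hierarchy check and the role check must pass
    return in_scope(user_account, device_account) and action in ROLE_PERMISSIONS.get(role, set())

print(allowed("operator", "analyst", "household-42", "read"))           # True: operator-level analyst can read
print(allowed("city-utility", "analyst", "household-42", "configure"))  # False: analysts cannot reconfigure

The same kind of check lets an operator-level analyst read data from a customer's devices while preventing a tenant from reconfiguring devices it does not own.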

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association, and others) and IoT gateways and devices, including provisioning, configuration, and monitoring, and troubleshoot IoT devices.

Network Interworking Proxy (NIP). The NIP component provides a connected devices framework for managing and communicating with disparate IoT gateways and devices, and communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs (volume, variety, and velocity) typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST, and others.
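To give a feel for what a simple protocol connector does, here is a hedged Python sketch of an MQTT ingestion path using the open-source paho-mqtt client: it subscribes to a topic, normalizes each device payload into a uniform record, and hands it northbound. The broker address, topic layout, and payload field names are assumptions made for illustration; this is not the NIP or Protocol Factory API.

import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.net"        # hypothetical broker host
TOPIC = "sensors/+/telemetry"        # hypothetical topic: one level carries the device id

def normalize(device_id, payload):
    # Map a device-specific JSON payload onto a uniform, resource-style record
    return {
        "deviceId": device_id,
        "metric": payload.get("type", "unknown"),
        "value": payload.get("value"),
        "timestamp": payload.get("ts"),
    }

def on_message(client, userdata, msg):
    # Topic looks like sensors/<device-id>/telemetry; extract the device id
    device_id = msg.topic.split("/")[1]
    record = normalize(device_id, json.loads(msg.payload))
    print("forwarding to platform:", record)   # a real connector would POST this northbound

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.subscribe(TOPIC)
client.loop_forever()

A real connector would add authentication, TLS, retries, and a mapping onto the oneM2M resource model instead of printing the record.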

Data Acquisition and Verification (DAV). DAV supports secure bi-directional data communication between IoT applications and IoT gateways/devices deployed in the field. The DAV component uses the underlying NIP to interact and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access, and consume these resources on the north-bound side using a oneM2M-compliant HTTP REST interface. The DAV component is also responsible for transformation, validation, and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation, and application-specific protocol transformation, as defined by the rules

• Validating and verifying data elements, handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element (see the sketch after this list)

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing
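The extrapolation idea in the validation bullet can be pictured with a small, generic sketch: if a device misses one report, estimate the value from its neighbors. The reading format and the simple midpoint rule below are assumptions for illustration, not HPE's DAV rules engine.

from typing import List, Optional, Tuple

def fill_missing(readings: List[Tuple[int, Optional[float]]]) -> List[Tuple[int, Optional[float]]]:
    # Replace a None value with the midpoint of its neighbours when both are present
    filled = []
    for i, (ts, value) in enumerate(readings):
        if value is None and 0 < i < len(readings) - 1:
            prev_val = readings[i - 1][1]
            next_val = readings[i + 1][1]
            if prev_val is not None and next_val is not None:
                value = (prev_val + next_val) / 2.0   # simple linear estimate
        filled.append((ts, value))
    return filled

# e.g. a meter that failed to report at 10:15
print(fill_missing([(1000, 5.0), (1015, None), (1030, 7.0)]))
# -> [(1000, 5.0), (1015, 6.0), (1030, 7.0)]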

The DAV component is responsible for ensuring security of the platform, covering:

• Registration of IoT devices, unique identification of devices, and supporting data communication only with trusted devices

• Management of device security keys for secure, encrypted communication

• Access Control Policies to manage and enforce the many-to-many communications between applications and devices

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics. The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution, and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media, and geo-fencing), predictive (determination), and prescriptive (recommendation).

Operations and Business Support Systems (OSS/BSS). The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways, and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and enhance the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer', and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: Identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services

• Automation: Reduces service outage time by automating major steps in the problem-resolution process

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures, and protecting vital revenue streams.

30

Data Service Cloud (DSC). The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashup for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success. The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value through monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE

31

WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make to a business. This early adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen

32

BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstance when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER

One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.

33

As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, and easy to manage and deploy with low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS? CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.

34

35

The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals, and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Has Important Insights For NonStop Users

36

Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20mph in a 40mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned, veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing this is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle When Tom leaves in the spring whatrsquos the first order of business once yoursquore flying solo and itrsquos all yours

Jeff Thatrsquos an interesting question because the benefit of having him here for this transition for this six months is that I feel like there wonrsquot be a hard line where all of a sudden hersquos not here anymore Itrsquos kind of strange because I havenrsquot really thought too much about it I had dinner with Tom and his wife the other night and I told them that on June first when we have our first staff call and hersquos not in the virtual room thatrsquos going to be pretty odd Therersquos not necessarily a first order of business per se as it really will be a continuation of what we would have been doing up until that point I definitely am not waiting until June to really get those messages across that I just mentioned Itrsquos really an empowerment and the goals are to make Tom proud and to honor what he has done as a career I know I will have in the back of my mind that I owe it to him to keep the momentum that hersquos built Itrsquos really just going to be putting work into action

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face.

An integrated SQL Database Manager for HP NonStop

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release had many updates, there were still many issues – not the least of which was that bash (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require GNV to be installed in order to use them.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using the HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and make files on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files for large searches.

The Bash, Coreutils, Gawk, Grep, Sed and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
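To make that flow concrete, here is a minimal, purely illustrative sketch in Python of the transformation just described. The real config_h.com is a DCL procedure with far more package knowledge; the file names below simply follow the description in the text.

# Illustrative only: mimics the described flow of turning a config.h.in
# template into a config.h that ends by including config_vms.h.
def generate_config_h(template_path="config.h.in", output_path="config.h"):
    with open(template_path) as src:
        lines = src.read().splitlines()

    out = []
    for line in lines:
        if line.startswith("#undef "):
            # The real procedure answers most of these correctly for OpenVMS;
            # this sketch just leaves them visibly unresolved.
            out.append("/* %s not resolved by this sketch */" % line[len("#undef "):].strip())
        else:
            out.append(line)

    # Package-specific corrections live in config_vms.h, included at the end.
    out.append('#include "config_vms.h"')

    with open(output_path, "w") as dst:
        dst.write("\n".join(out) + "\n")

if __name__ == "__main__":
    generate_config_h()

The point of the design is that the generated config.h stays generic, while anything the generator cannot get right for a particular package is overridden in the hand-maintained config_vms.h pulled in at the end.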

In many ways, the ability to easily port open source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment these and other packages use.

There are also tools outside the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in open source for many years; he ported MySQL, started the port of PostgreSQL and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical open source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained essentially constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices – Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection (OSI) seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the four-layer data protection stack – Application, Database, Object and Storage – with example traffic flowing between layers (formatted data items; files and directories; disk blocks). Flow represents transport of clear data between layers via a secure tunnel; descriptions represent example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is (as the sketch following this list illustrates):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and that output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
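The following toy sketch, in Python, illustrates these three properties only. It is not the NIST FFX algorithm and is not secure; the key and card number are invented for illustration.

# Toy illustration only: NOT the NIST FFX algorithm and NOT secure.
# A keyed, digit-preserving, reversible transform used to demonstrate the
# equivalent, referential and reversible properties described above.
import hashlib
import hmac

KEY = b"demo-key"  # hypothetical key, for illustration only

def _digit_offsets(length, key):
    # Deterministic per-position digit offsets derived from the key.
    stream = hmac.new(key, str(length).encode(), hashlib.sha256).digest()
    while len(stream) < length:
        stream += hmac.new(key, stream, hashlib.sha256).digest()
    return [b % 10 for b in stream[:length]]

def protect(pan, key=KEY):
    offsets = _digit_offsets(len(pan), key)
    return "".join(str((int(d) + k) % 10) for d, k in zip(pan, offsets))

def reveal(token, key=KEY):
    offsets = _digit_offsets(len(token), key)
    return "".join(str((int(d) - k) % 10) for d, k in zip(token, offsets))

pan = "4111111111111111"                              # hypothetical card number
token = protect(pan)
assert token.isdigit() and len(token) == len(pan)     # equivalent
assert protect(pan) == token                          # referential (deterministic)
assert reveal(token) == pan                           # reversible

Because the per-position offsets form a bijection on digit strings of a given length, two distinct inputs can never map to the same output, which is what preserves primary and foreign key relationships; a production deployment would of course use a vetted FFX implementation instead of a toy transform like this one.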

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data: One obvious benefit of implementing a-priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of breached data increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI) or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary: Legacy systems present challenges when applying storage, object and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1. Reinvigorate RAS (reliability, availability and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


Hierarchical customer account modeling, coupled with the Role-Based Access Control (RBAC) mechanism, enables various mutually beneficial service models such as B2B, B2C and B2B2C.

With the DSM module you can manage IoT applications (configuration, tariff plan, subscription, device association and others) and IoT gateways and devices, including provisioning, configuration and monitoring, as well as troubleshoot IoT devices.

Network Interworking Proxy (NIP): The NIP component provides a connected-devices framework for managing and communicating with disparate IoT gateways and devices, and for communicating over different types of underlying networks. With NIP you get interoperability and information exchange between the heterogeneous systems deployed in the field and the uniform oneM2M-compliant resource model supported by the HPE Universal IoT Platform. It's based on a 'Distributed Message Queue' architecture and designed to deal with the three Vs – volume, variety and velocity – typically associated with handling IoT data.

NIP is supported by the 'Protocol Factory' for rapid development of the device controllers/proxies for onboarding new IoT protocols onto the platform. It has built-in device controllers and proxies for IoT vendor devices and other key IoT connectivity protocols such as MQTT, LWM2M, DLMS/COSEM, HTTP REST and others.

Data Acquisition and Verification (DAV): DAV supports secure, bi-directional data communication between IoT applications and the IoT gateways and devices deployed in the field. The DAV component uses the underlying NIP to interact with and acquire IoT data and maintain it in a resource-oriented, uniform data model aligned with oneM2M. This data model is completely agnostic to the device or application, so it's completely flexible and extensible. IoT applications, in turn, can discover, access and consume these resources on the north-bound side using the oneM2M-compliant HTTP REST interface (an illustrative sketch of such a call follows the list below). The DAV component is also responsible for transformation, validation and processing of the IoT data:

• Transforming data through multiple steps that extend from aggregation, data unit transformation and application-specific protocol transformation, as defined by the rules.

• Validating and verifying data elements, and handling missing ones through re-acquisition or extrapolation, as defined in the rules for the given data element.

• Data processing and triggering of actions based on the type of message, such as alarm processing and complex-event processing.
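As a rough illustration of the north-bound side, the Python sketch below reads the latest content instance from a device container over a oneM2M-style HTTP binding. The endpoint, CSE path, originator and request identifier are invented for illustration; the platform's actual resource tree and required headers may differ.

# Hypothetical example of an application consuming a DAV resource over a
# oneM2M-style HTTP REST binding. All names below are placeholders.
import requests

BASE = "https://iot-platform.example.com/onem2m"   # hypothetical endpoint

headers = {
    "X-M2M-Origin": "C-demo-app",   # originator (application) identifier
    "X-M2M-RI": "req-0001",         # request identifier
    "Accept": "application/json",
}

# 'la' is the oneM2M shorthand for the latest content instance in a container.
resp = requests.get(BASE + "/cse-name/device-001/temperature/la", headers=headers)
resp.raise_for_status()
print(resp.json())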

The DAV component is also responsible for ensuring the security of the platform, covering:

• Registration of IoT devices, unique identification of devices and supporting data communication only with trusted devices.

• Management of device security keys for secure, encrypted communication.

• Access Control Policies that manage and enforce the many-to-many communications between applications and devices.

The DAV component uses a combination of data stores based on relational and columnar databases for storing IoT data, ensuring enhanced performance even for distinctly different types of operations, such as transactional operations and analytics/batch processing-related operations. The columnar database, used in conjunction with distributed file system-based storage, provides for extended longevity of the data stored at an efficient cost. This combination of hot and cold data storage enables analytics to be supported over a longer period of IoT data collected from the devices.

Data Analytics: The Data Analytics module leverages HPE Vertica technology for discovery of meaningful patterns in data collected from devices, in conjunction with other application-specific, externally imported data. This component provides a creation, execution and visualization environment for most types of analytics, including batch and real-time (based on 'Complex-Event Processing'), for creating data insights that can be used for business analysis and/or monetized by sharing insights with partners. IoT Data Analytics covers various types of analytical modeling, such as descriptive (key performance indicators, social media and geo-fencing), predictive (determination) and prescriptive (recommendation).
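To make the batch side concrete, here is a hedged sketch of pulling an aggregate out of the columnar store with the vertica-python client; the connection details and the meter_readings table are invented for illustration and are not part of any documented platform schema.

# Hypothetical batch query against a Vertica table of IoT readings.
import vertica_python

conn_info = {
    "host": "vertica.example.com",   # placeholder connection details
    "port": 5433,
    "user": "analyst",
    "password": "secret",
    "database": "iot",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(
        "SELECT device_id, AVG(value) AS avg_value "
        "FROM meter_readings "
        "WHERE reading_ts >= NOW() - INTERVAL '1 day' "
        "GROUP BY device_id"
    )
    for device_id, avg_value in cur.fetchall():
        print(device_id, avg_value)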

Operations and Business Support Systems (OSS/BSS): The BSS/OSS module provides a consolidated, end-to-end view of devices, gateways and network information. This module helps IoT operators automate and prioritize key operational tasks, reduce downtime through faster resolution of infrastructure issues, improve service quality, and make better use of the human and financial resources needed for daily operations. The module uses field-proven applications from HPE's own OSS portfolio, such as 'Telecommunication Management Information Platform', 'Unified Correlation Analyzer' and 'Order Management'.

The BSS/OSS module drives operational efficiency and service reliability in multiple ways:

• Correlation: identifies problems quickly through automated problem correlation and root-cause analysis across multiple infrastructure domains, and determines the impact on services.

• Automation: reduces service outage time by automating major steps in the problem-resolution process.

The OSS Console supports business-critical service operations and processes. It provides real-time data and metrics that support reacting to business change as it happens, detecting service failures and protecting vital revenue streams.


Data Service Cloud (DSC): The DSC module enables advanced monetization models especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success: The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development and roll-out of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture you get an industry-vertical- and client-agnostic solution with high scalability, modularity and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE


WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen


BENEFITS OF DATA-DRIVEN DECISION MAKING

Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products. By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations. Information from accounting, cash flow status, budgets, inventory, human resources and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage. Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover. The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED

Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
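As a minimal sketch of that first segmentation step, assuming a simple order history with hypothetical column names, the Python snippet below splits customers into first-time and repeat buyers and compares their spend.

# Illustrative only: segment first-time vs. repeat customers from a tiny,
# made-up order history and compare total spend per segment.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "order_value": [120.0, 80.0, 35.0, 60.0, 45.0, 150.0],
})

per_customer = orders.groupby("customer_id")["order_value"].agg(["count", "sum"])
per_customer["segment"] = per_customer["count"].map(
    lambda c: "repeat" if c > 1 else "first-time"
)

# Average total spend and customer count per segment.
print(per_customer.groupby("segment")["sum"].agg(["mean", "count"]))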

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE

Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking and storage offer the performance, scale and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER

One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication and policy management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff, and guest access for patients, their families and others. CHUV uses built-in ClearPass device profiling capabilities to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced consistently good performance, meeting the needs of staff, patients and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low-maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS? CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime – which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs, at 39% and 35% of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances: Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE) and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threats are the most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes – which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
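
On the SIEM point, the usual integration pattern is to forward security events from the audited system to the enterprise collector in a normalized format such as CEF over syslog. The Python sketch below shows that general pattern only; it is not how XYGATE Merged Audit is implemented, and the host, port, vendor string, and event fields are illustrative assumptions.

    import logging
    from logging.handlers import SysLogHandler

    # Hypothetical SIEM collector; ArcSight and most SIEMs accept CEF events over syslog.
    SIEM_HOST, SIEM_PORT = "siem.example.com", 514

    def cef(signature_id, name, severity, extensions):
        # CEF header: CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension
        extension = " ".join("%s=%s" % (key, value) for key, value in extensions.items())
        return "CEF:0|ExampleCo|AuditForwarder|1.0|%s|%s|%d|%s" % (signature_id, name, severity, extension)

    audit_log = logging.getLogger("audit")
    audit_log.setLevel(logging.INFO)
    audit_log.addHandler(SysLogHandler(address=(SIEM_HOST, SIEM_PORT)))

    # Forward a failed privileged-logon event so the SIEM can correlate and alert on it.
    audit_log.info(cef("100", "Privileged logon failure", 7,
                       {"suser": "SUPER.OPER", "src": "10.1.2.3", "outcome": "failure"}))

Once events arrive in a common format like this, the SIEM can correlate NonStop activity with events from the rest of the enterprise, which is exactly the kind of monitoring the study suggests is too rarely deployed.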

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf; back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I just don't have the desire to get involved with it. It's something I'm not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have mentored people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are always the first priority. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to get across those messages I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress
Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity.

Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release had so many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers for the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file, for the things that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" in it at the end. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
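
The DCL procedure itself is not reproduced here, but the general idea is easy to see in a short sketch: scan the config.h.in template, answer the feature macros the generator knows about, and append an include of config_vms.h so that package-specific corrections can be supplied separately. The following Python sketch is an illustration only; the default file names and the KNOWN_DEFINES table are assumptions for the example, not the real probe results.

    # Illustrative sketch of a config.h generator in the style described above.
    # The macro answers below are assumptions for the example, not real probe results.
    KNOWN_DEFINES = {
        "HAVE_UNISTD_H": "1",
        "HAVE_STRING_H": "1",
        "HAVE_MMAP": None,   # None means "leave undefined"
    }

    def generate_config_h(template_path="config.h.in", output_path="config.h"):
        out = []
        with open(template_path) as template:
            for line in template:
                tokens = line.split()
                # Autoconf templates mark unresolved feature macros as "#undef NAME".
                if len(tokens) == 2 and tokens[0] == "#undef" and tokens[1] in KNOWN_DEFINES:
                    value = KNOWN_DEFINES[tokens[1]]
                    if value is None:
                        out.append("/* #undef %s */\n" % tokens[1])
                    else:
                        out.append("#define %s %s\n" % (tokens[1], value))
                else:
                    out.append(line)
        # Anything the generator cannot work out is supplied by a hand-written override file.
        out.append('#include "config_vms.h"\n')
        with open(output_path, "w") as output:
            output.writelines(out)

    if __name__ == "__main__":
        generate_config_h()

The point of the design is the final include: the bulk of config.h can be produced automatically, while the few package-specific answers live in one small, hand-maintained file.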

In many ways, the ability either to easily port Open Source software to OpenVMS or to maintain a code base consistent between OpenVMS and other platforms is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" for a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack, with layers Application, Database, Object, and Storage; example traffic includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with the output of another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
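
To make these three properties concrete, here is a minimal Python sketch of a keyed, format-preserving permutation over digit strings. It is a toy Feistel-style construction for illustration only; it is not the NIST FFX/FF1 algorithm and not any HPE product, and the hard-coded key is an assumption made purely for the example.

    import hashlib
    import hmac

    ROUNDS = 10

    def _round_tweak(key, round_no, half, width):
        # Pseudo-random value in [0, 10**width) derived from the key, round number, and one half.
        digest = hmac.new(key, ("%d:%s" % (round_no, half)).encode(), hashlib.sha256).hexdigest()
        return int(digest, 16) % (10 ** width)

    def protect(key, digits):
        """Toy format-preserving permutation: a digit string in, a same-length digit string out.
        Assumes at least two digits, e.g. a 15- or 16-digit card number."""
        mid = len(digits) // 2
        left, right = digits[:mid], digits[mid:]
        for rnd in range(ROUNDS):
            if rnd % 2 == 0:   # even rounds rewrite the left half using the right half
                tweak = _round_tweak(key, rnd, right, len(left))
                left = str((int(left) + tweak) % (10 ** len(left))).zfill(len(left))
            else:              # odd rounds rewrite the right half using the left half
                tweak = _round_tweak(key, rnd, left, len(right))
                right = str((int(right) + tweak) % (10 ** len(right))).zfill(len(right))
        return left + right

    def recover(key, digits):
        """Inverse of protect(): undo the rounds in reverse order."""
        mid = len(digits) // 2
        left, right = digits[:mid], digits[mid:]
        for rnd in reversed(range(ROUNDS)):
            if rnd % 2 == 0:
                tweak = _round_tweak(key, rnd, right, len(left))
                left = str((int(left) - tweak) % (10 ** len(left))).zfill(len(left))
            else:
                tweak = _round_tweak(key, rnd, left, len(right))
                right = str((int(right) - tweak) % (10 ** len(right))).zfill(len(right))
        return left + right

    if __name__ == "__main__":
        key = b"demo key; real keys belong in a key manager"   # assumption for the example
        pan = "4111111111111111"
        token = protect(key, pan)
        assert len(token) == len(pan) and token.isdigit()      # Equivalent
        assert token == protect(key, pan)                       # Referential (deterministic)
        assert recover(key, token) == pan                        # Reversible
        print(pan, "->", token)

Run against a 16-digit test number, the output is another 16-digit string, the same input always yields the same output, and recover() returns the original value; those are exactly the equivalent, referential, and reversible properties listed above.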

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
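
Continuing the toy sketch above, the "wrapping" pattern the article describes amounts to protecting at the web-interface boundary and recovering only inside the payment interface. The legacy calls below are hypothetical stand-ins, not real APIs, and protect()/recover() are the illustrative functions defined earlier:

    def store_payment(protected_pan, amount):
        # Hypothetical stand-in for the legacy system's own storage interface.
        print("stored:", protected_pan, amount)

    def charge_transaction(pan, amount):
        # Hypothetical stand-in for the legacy payment switch.
        print("charged:", "*" * (len(pan) - 4) + pan[-4:], amount)

    def handle_payment_form(key, pan, amount):
        # Ingress: only the protected value ever reaches legacy storage.
        store_payment(protect(key, pan), amount)

    def settle(key, protected_pan, amount):
        # Egress: only the payment interface holds the key and can recover the card number.
        charge_transaction(recover(key, protected_pan), amount)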

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK. 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter. Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum. Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security. Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money. NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT. HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nichols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK. 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2, along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


30

Data Service Cloud (DSC): The DSC module enables advanced monetization models, especially fine-tuned for IoT and cloud-based offerings. DSC supports mashups for new content creation, providing additional insight by combining embedded IoT data with internal and external data from other systems. This additional insight can provide value to other stakeholders outside the immediate IoT ecosystem, enabling monetization of such information.

Application Studio in DSC enables rapid development of IoT applications through reusable components and modules, reducing the cost and time-to-market for IoT applications. The DSC, a partner-oriented layer, securely manages the stakeholder lifecycle in B2B and B2B2C models.

Data Monetization Equals Success: The end game with IoT is to securely monetize the vast treasure troves of IoT-generated data to deliver value to enterprise applications, whether by enabling new revenue streams, reducing costs, or improving customer experience.

The complex and fragmented ecosystem that exists within IoT requires an infrastructure that interconnects the various components of the end-to-end solution, from device through to application, to sit on top of ubiquitous, securely managed connectivity and enable identification, development, and rollout of industry-specific use cases that deliver this value.

With the HPE Universal IoT Platform architecture, you get an industry-vertical- and client-agnostic solution with high scalability, modularity, and versatility. This enables you to manage your IoT solutions and deliver value by monetizing the vast amount of data generated by connected devices and making it available to enterprise-specific applications and use cases.

CLICK HERE TO LEARN MORE

31

WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS

If you've read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A's, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE

If you are a small to midsize business, you may think that Big Data is not for you. In this context, the word "big" can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big Data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen

32

BENEFITS OF DATA DRIVEN DECISION MAKING Business intelligence from systematic customer data analysis can profoundly impact many areas of the business including

1 Improved products By analyzing customer behavior it is possible to extrapolate which product features provide the most value and which donrsquot

2 Better business operations Information from accounting cash flow status budgets inventory human resources and project management all provide invaluable insights capable of improving every area of the business

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
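A starting point can be very small. The sketch below, written in Python with entirely hypothetical order records and field names, separates one-time from repeat buyers and compares their average order values; the same pattern extends to comparing campaigns or high-value versus low-value purchasing behavior.

from collections import Counter
from statistics import mean

# Hypothetical order records; in practice these would come from your commerce or CRM system.
orders = [
    {"customer": "C001", "total": 42.50},
    {"customer": "C002", "total": 19.99},
    {"customer": "C001", "total": 63.00},
    {"customer": "C003", "total": 8.75},
    {"customer": "C002", "total": 27.40},
]

order_counts = Counter(order["customer"] for order in orders)

# Segment: orders placed by repeat customers versus by one-time (first-time) customers.
repeat_orders = [o for o in orders if order_counts[o["customer"]] > 1]
one_time_orders = [o for o in orders if order_counts[o["customer"]] == 1]

print("repeat customers, average order value:",
      round(mean(o["total"] for o in repeat_orders), 2))
print("one-time customers, average order value:",
      round(mean(o["total"] for o in one_time_orders), 2))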

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations: In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START, THE BETTER
One thing is clear: the time to develop and enhance your data insight capability is now. For more information, read the e-book Turning Big Data into Business Insights, or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.


As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location, all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.


The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So, while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development and Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users


Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kenneth.scudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation


I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late '90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring a person so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was, and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.


It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you had asked me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.


Gabrielle: Tom, what has it been like, from your perspective, to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this six-month transition is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.


SQLXPress: Not just another pretty face.

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New: Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.



The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years, the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release saw many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012, a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha OpenVMS 8.3+ and IA64 OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen


Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package before running config_h.com.

The config_h.com generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the ClamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently, Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to see your help on the projects as well.


Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee [3], as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram (see footnote 1).

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

Footnote 1: This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications [7]. Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years [8]. It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection (see footnote 2) responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data [10].

At each layer, it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


Footnote 2: We use the term "protection" as a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning having the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits (see footnote 3). The output of this algorithm is another digit string that is, as illustrated in the sketch following this list:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
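To make these three properties concrete, here is a minimal, self-contained Python sketch. It is not the NIST FFX (FF1/FF3) standard and is not suitable for production use; it only imitates the behavior an FPE library would provide, using a toy Feistel construction over digit strings. The key and the card number are made-up example values.

import hmac
import hashlib

KEY = b"example-key-not-for-production"
ROUNDS = 8  # an even round count keeps the two halves the same width at the end


def _round_value(key: bytes, round_no: int, half: str, width: int) -> int:
    """Keyed round function: maps one digit-string half to a number below 10**width."""
    msg = bytes([round_no]) + half.encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)


def protect(digits: str, key: bytes = KEY) -> str:
    """Deterministically encode a digit string as another digit string of equal length."""
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for r in range(ROUNDS):
        f = _round_value(key, r, right, len(left))
        left, right = right, f"{(int(left) + f) % 10 ** len(left):0{len(left)}d}"
    return left + right


def unprotect(digits: str, key: bytes = KEY) -> str:
    """Invert protect(), recovering the original digit string."""
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for r in reversed(range(ROUNDS)):
        f = _round_value(key, r, left, len(right))
        left, right = f"{(int(right) - f) % 10 ** len(right):0{len(right)}d}", left
    return left + right


if __name__ == "__main__":
    card = "4111111111111111"                           # a well-known test number, not a real account
    token = protect(card)
    assert token.isdigit() and len(token) == len(card)  # equivalent: same format and length
    assert protect(card) == token                       # referential: always the same output
    assert unprotect(token) == card                     # reversible: recoverable only with the key
    print(card, "->", token)

Running the sketch shows the protected value is another 16-digit string, that re-protecting the same card always yields the same output, and that the original number is recovered only when the key is available.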

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
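As a rough illustration of why the referential property matters for sharing, the Python sketch below uses entirely made-up patient identifiers and values, and a simple keyed pseudonym standing in for a real FPE or tokenization product. It shows that two de-identified data sets can still be joined on the protected key by a third party that never sees the original identifiers.

import hmac
import hashlib

KEY = b"shared-protection-key"  # held by the data owner, never given to the analytics firm


def pseudonym(patient_id: str, key: bytes = KEY) -> str:
    """Deterministic (referential) pseudonym for an identifier."""
    return hmac.new(key, patient_id.encode(), hashlib.sha256).hexdigest()[:16]


# Two internal data sets keyed by real patient IDs (made-up values).
visits = {"P-1001": "2015-06-01 clinic visit", "P-1002": "2015-06-03 clinic visit"}
labs = {"P-1001": "HbA1c 7.9", "P-1002": "HbA1c 5.4"}

# De-identify before sharing: the same input always yields the same pseudonym,
# so the relationship between the two data sets survives anonymization.
shared_visits = {pseudonym(pid): record for pid, record in visits.items()}
shared_labs = {pseudonym(pid): record for pid, record in labs.items()}

# A third-party analytics firm can join on the protected key without seeing PII.
for pid, visit in shared_visits.items():
    print(pid, visit, "|", shared_labs[pid])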

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

Footnote 3: American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

[1] Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
[2] IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
[3] Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
[4] Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
[5] Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
[6] US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
[7] Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
[8] Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
[9] Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
[10] Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
[11] Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
[12] Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
[13] Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
[14] "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
[15] Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
[16] Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
[17] McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
[18] Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.


"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability (99.999%) is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality

A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you’re an SAP shop, you’re probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa’s SAP retail system is one of the world’s largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, “We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications.” Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year’s Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership’s Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann’s favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn’t. But that’s effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn’t work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn’t jump out of an airplane without a working parachute, so don’t rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you’ll know your parachute will open – every time.

You wouldn’t jump out of an airplane unless you knew your parachute worked – would you?


31

WHY BIG DATA MAKES BIG SENSE FOR EVERY SIZE BUSINESS
If you’ve read the book or seen the movie Moneyball, you understand how early adoption of data analysis can lead to competitive advantage and extraordinary results. In this true story, the general manager of the Oakland A’s, Billy Beane, is faced with cuts reducing his budget to one of the lowest in his league. Beane was able to build a successful team on a shoestring budget by using data on players to find value that was not obvious to other teams. Multiple playoff appearances later, Beane was voted one of the Top 10 GMs/Executives of the Decade and has changed the business of baseball forever.

We might not all be able to have Brad Pitt portray us in a movie, but the ability to collect and analyze data to build successful businesses is within reach for businesses of all sizes today.

NOT JUST FOR LARGE ENTERPRISES ANYMORE
If you are a small to midsize business, you may think that Big Data is not for you. In this context the word “big” can be misleading. It simply means the ability to systematically collect and analyze data (analytics) and to use insights from that data to improve the business. The volume of data is dependent on the size of the company; the insights gleaned from it are not.

As implementation prices have decreased and business benefits have increased, early SMB adopters are recognizing the profound bottom-line impact Big Data can make on a business. This early-adopter competitive advantage is still there, but the window is closing. Now is the perfect time to analyze your business processes and implement effective data analysis tools and infrastructure. Big Data technology has evolved to the point where it is an important and affordable tool for businesses of all sizes.

Big data is a special kind of alchemy, turning previously ignored data into business gold.

QUICK GUIDE TO INCREASING PROFITS WITH BIG DATA TECHNOLOGY

Kelley Bowen

32

BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products. By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don’t.

2. Better business operations. Information from accounting, cash flow status, budgets, inventory, human resources and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage. Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don’t use such valuable information.

4. Reduced customer turnover. The ability to identify the circumstances when a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers, as in the short sketch below. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
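As a starting point, that segmentation step can be as simple as a few lines of Python. The sketch below is purely illustrative; the file and column names (orders.csv, customer_id, order_date, amount) are assumptions rather than references to any product or data set mentioned in this article.

# Minimal sketch: separate first-time from repeat customers and compare them.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# One row per customer: how many orders and how much total spend.
summary = orders.groupby("customer_id").agg(
    order_count=("order_date", "count"),
    total_spend=("amount", "sum"),
)

# First-time buyers have exactly one order; everyone else is a repeat buyer.
summary["segment"] = summary["order_count"].apply(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Average order count and spend per segment highlights high- and low-value behavior.
print(summary.groupby("segment")[["order_count", "total_spend"]].mean())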

According to Zoher Karu, eBay’s vice president of global customer optimization and data, the best strategy is to “take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that’s repeatable.”

PUT THE FOUNDATION IN PLACE
Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking and storage offer the performance, scale and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers’ critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014.

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-Book Turning big data into business insights or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise’s Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE’s Just Right IT portfolio of products, solutions and services for SMBs.

Kelley works closely with HPE’s product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.

33

As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication and policy management creation for their different groups of users. ClearPass delivers these capabilities.

Below I’ve shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq ft facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that’s automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and guest access for patients, their families and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance meeting the needs of staff, patients and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca’s most recent fall orientation period, ClearPass helped the institution shine. “Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues,” said John Eberle, Deputy CIO of Infrastructure. “The tool has proven to be rock solid.” Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening and raising chickens.

34

35

The latest reports on IT security all seem to point to a similar trend – both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute’s recent report “2015 Cost of Cyber Crime Study: Global,” sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications and Retail industries. So while we’ve not seen the NonStop platform in the news for security breaches, it’s clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime – which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE) and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users

36

Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that “only” 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes – which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.
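To make the “deviation from normal” idea concrete, here is a deliberately tiny Python sketch that flags activity far outside a user’s own historical baseline. It is only an illustration of the concept; the data, field names and three-sigma threshold are assumptions, and it does not describe how any XYPRO or HPE product works.

# Toy baseline-deviation check: alert when today's activity count for a user
# is far above that user's own historical norm.
from statistics import mean, stdev

# Daily event counts per user gathered during a known-good baseline period.
history = {
    "alice": [12, 15, 11, 14, 13, 12, 16],
    "bob":   [40, 38, 42, 41, 39, 44, 40],
}

def is_suspicious(user, todays_count, sigmas=3.0):
    """True if today's count exceeds the user's baseline mean by `sigmas` std devs."""
    baseline = history.get(user)
    if not baseline or len(baseline) < 2:
        return False  # not enough data to judge this user yet
    mu, sd = mean(baseline), stdev(baseline)
    return todays_count > mu + sigmas * max(sd, 1.0)

print(is_suspicious("alice", 14))  # False: within her normal range
print(is_suspicious("alice", 90))  # True: worth an alert and a closer look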

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It’s very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study – but we can’t be sure, and it’s worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it’s dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren’t as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report and it’s worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems – including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kenneth.scudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute “2015 Cost of Cyber Crime Study: Global,” sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom’s upcoming retirement, their unique relationship and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It’s been awesome to have him in the background and be able to leverage his experience while I’m growing into it. I’m really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that’s how we wanted to tie that together. It’s been a natural transition. It wasn’t a big shock to the system or anything.

Gabrielle: So it doesn’t differ too much then from your previous role?

Jeff: No, it’s very similar. We’re both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It’s very familiar in terms of processes, talent and people. I really feel good about moving into the role, and I’m definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring a person early in their career. That’s what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf back in the early Tandem days, and there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, “I have to go home because my mother was in an accident.” He reassured me it was just a small fender bender – nothing serious – but she was a little shaken up. I’m visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, “His poor old mother.” I asked how old she was and he said, “56.” I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It’s just funny when you start getting to sales engagement and you’re peers and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig’s startup companies.

PASSING THE TORCH: HPE’s Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

Jeff: It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it’s my third tour of duty here, and it’s been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It’s rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don’t know that I have ever heard of that happening.

Gabrielle: That’s such a great story.

Jeff: It’s crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it’s like, “Wow. That is pretty cool.” And the talent we have on this team is amazing. We’re a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there’s consistent account coverage over that same amount of time. You just don’t see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community – everybody knows each other because they have been doing it for a long time. Maybe it’s out there in other places; I just haven’t seen it. The people at HPE are really unconditional in the way that they approach the job, the customers and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I’m doing it. It’s a little bit surreal sometimes, but at the same time it’s an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don’t do very well is I don’t have the desire to get involved with marketing. It’s something I’m just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He’s the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I’ve worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it’s a very complex product, but it’s a fun product. It’s a value proposition sell, not a commodity sell.

Secondly, it’s a relationship sell because of the nature of the solution. It’s the highest mission-critical application within our customer base. If this system doesn’t work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they’re quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That’s why I feel like NonStop is unique. It’s the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It’s been awesome. Everybody should have a mentor, but it’s a two-way street. You can’t just say, “I need a mentor.” It doesn’t work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that’s only one part of it.

The other part is to see what kind of person he is overall and with his family, friends and the people that he meets. He’s the real deal. I’ve just been really, really lucky to get to spend all that time with him. If you didn’t know any better, you would think he’s a salesman’s salesman sometimes, because he is so gregarious, outgoing and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn’t have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff’s mentor?

Tom: Jeff was easy. He’s very bright and has a wonderful sales personality. It’s easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It’s just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I’m no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he’s the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It’s a friendship that’s gone on for 35 years, and we see each other very often. He’s one of the smartest men I know, and he has great insight into the sales process. To this day he’s one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can’t imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that’s with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can’t do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That’s really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what’s the first order of business once you’re flying solo and it’s all yours?

Jeff: That’s an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won’t be a hard line where all of a sudden he’s not here anymore. It’s kind of strange because I haven’t really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he’s not in the virtual room, that’s going to be pretty odd. There’s not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It’s really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he’s built. It’s really just going to be putting work into action.

Gabrielle: It’s just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it’s so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It’s bittersweet because he won’t be there day-to-day, but I am so happy for him. It’s about not screwing things up, but it’s also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, “Do you know Tom Moylan?” even if it was a few degrees of separation, the answer has always been “Yes.” And not only yes, but “What a great guy.” He’s been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It’s got a lot of people energized, and it’s not lost on anyone, especially me. I think this will be one of those defining times when you’re sitting here five years from now going, “Wow, that was really a pivotal moment for us in our history.” It’s cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff!

Jeff: Thank you.

40

SQLXPress – Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & SQL/MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU’s Not VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent “official” release was in November of 2011, when version 3.0.1 was released. While that release had so many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 there was a Community effort started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first. Upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make utility is an older fork of GNU Make. The rest of the utilities are, as of Jan 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work-in-progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environments used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under “Files.”

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen’s site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison and many others. A quick count suggests that Jouk’s repository has over 300 packages. Links from Jouk’s site get you to Hunter Goatley’s archive, Patrick Moreau’s archive and HP’s archive.

Ruslan’s site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter’s archive: Hunter’s archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain underprotected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application’s underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach’s consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.¹ Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.²

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,³ as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.⁴ This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.⁵ Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).⁶

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2ⁿ), as illustrated by the following diagram.¹

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

¹ This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.⁷ Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.⁸ It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data’s lifetime is usually much longer than any manufacturer’s support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.⁹ In the data protection stack, each layer represents a discrete protection² responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.¹⁰

At each layer it’s important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


² We use the term “protection” for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the four-layer data protection stack – Application (formatted data items), Database, Object (files, directories) and Storage (disk blocks) – where flows represent transport of clear data between layers via a secure tunnel and descriptions represent example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning having the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.¹¹

As an exercise, let’s consider “wrapping” a legacy system with a new web interface¹² that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let’s consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.³ The output of this algorithm is another digit string that is (a small illustrative sketch follows the list below):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
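To make these three properties concrete, the short Python sketch below implements a toy format-preserving transform over digit strings. It is deliberately not the NIST FFX/FF1 algorithm or any product’s implementation – just a small Feistel construction with an assumed demonstration key, offered only to illustrate the equivalent, referential and reversible behavior described above.

# Toy format-preserving cipher over digit strings (illustration only; not FFX/FF1).
import hmac, hashlib

KEY = b"demo-key-not-for-production"   # assumed demonstration key
ROUNDS = 8                             # even, so the half widths return to their original order

def _f(data: str, rnd: int, width: int) -> int:
    """Keyed pseudo-random round function returning an integer below 10**width."""
    digest = hmac.new(KEY, f"{rnd}:{data}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def protect(digits: str) -> str:
    """Map a digit string to another digit string of the same length (deterministic)."""
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for rnd in range(ROUNDS):
        width = len(left)
        left, right = right, f"{(int(left) + _f(right, rnd, width)) % 10**width:0{width}d}"
    return left + right

def unprotect(digits: str) -> str:
    """Invert protect() to recover the original digit string."""
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for rnd in reversed(range(ROUNDS)):
        width = len(right)
        left, right = f"{(int(right) - _f(left, rnd, width)) % 10**width:0{width}d}", left
    return left + right

card = "4111111111111111"
token = protect(card)
print(token, len(token) == len(card))   # equivalent: same length, still all digits
print(protect(card) == token)           # referential: the same input always maps to the same output
print(unprotect(token) == card)         # reversible: the key holder can recover the original

A production system would rely on the standardized FFX constructions and managed keys; the point of the sketch is only that the protected value can stand in for the original wherever the legacy system expects a digit string of the same shape.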

Now as we collect more and more customer records we no longer increase the ldquoblack marketrdquo opportunity If a hacker were to successfully breach our legacy credit card database that hacker would obtain row upon row of protected credit card numbers none of which could be used by the hacker to conduct a payment transaction Instead the payment interface having exclusive access to the inverse FPE algorithm would be the only node able to charge a transaction

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
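To make the ingress/egress pattern concrete, here is a minimal Python sketch of an application-layer wrapper. It is illustrative only: fpe_protect and fpe_reveal are hypothetical names standing in for a real NIST FFX implementation, and the simple keyed digit shift used to emulate them is a toy, not secure cryptography.

# Toy illustration of application-layer, format-preserving protection at
# ingress and egress.  NOT real cryptography: a production deployment would
# call a NIST FFX-compliant FPE library here.  Names are hypothetical.

SECRET_KEY = 7  # stand-in for a managed encryption key


def fpe_protect(pan: str, key: int = SECRET_KEY) -> str:
    """Map a digit string to another digit string of the same length.

    Deterministic (referential), length- and charset-preserving (equivalent),
    and invertible (reversible) -- the three properties listed above.
    """
    return "".join(str((int(d) + key + i) % 10) for i, d in enumerate(pan))


def fpe_reveal(token: str, key: int = SECRET_KEY) -> str:
    """Invert fpe_protect, recovering the original digit string."""
    return "".join(str((int(d) - key - i) % 10) for i, d in enumerate(token))


# Ingress: the web front end protects the PAN before the legacy system sees it.
card_in = "4111111111111111"
stored = fpe_protect(card_in)            # what the legacy database would hold

# Egress: only the payment interface, which holds the key, re-identifies it.
assert fpe_reveal(stored) == card_in
assert len(stored) == len(card_in) and stored.isdigit()   # format preserved
print(stored)

Because the transform is deterministic and format preserving, the legacy schema, field widths, and key relationships are untouched; only the component holding the key can recover the original numbers.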

Benefits of sharing protected data. One obvious benefit of implementing a-priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the new value created by data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

³ American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.
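The analytics value of the referential property is easy to demonstrate: because equal inputs always map to equal tokens, a third party can still join and aggregate data sets it never sees in the clear. The following sketch is an assumption-laden illustration in Python with pandas, not the carrier's or hospital's actual pipeline; the token values and column names are made up.

import pandas as pd

# Two data sets as an outside analytics firm might receive them: the card
# numbers (or patient IDs) are already FPE-tokenized, but the *same* token
# appears wherever the same original value appeared, so joins still work.
store_visits = pd.DataFrame({
    "card_token": ["8219004418", "8219004418", "5530771203"],
    "store": ["downtown", "airport", "downtown"],
})
online_orders = pd.DataFrame({
    "card_token": ["8219004418", "5530771203", "5530771203"],
    "order_total": [120.00, 35.50, 72.25],
})

# A "360° view" style analysis: link in-store and online behavior per customer
# without ever handling a real card number.
combined = store_visits.merge(online_orders, on="card_token")
spend_by_store = combined.groupby("store")["order_total"].sum()
print(spend_by_store)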

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter. Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum. Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security. Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money. NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT. HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable, demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction in IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, the program is pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


32

BENEFITS OF DATA-DRIVEN DECISION MAKING
Business intelligence from systematic customer data analysis can profoundly impact many areas of the business, including:

1. Improved products: By analyzing customer behavior, it is possible to extrapolate which product features provide the most value and which don't.

2. Better business operations: Information from accounting, cash flow status, budgets, inventory, human resources, and project management all provides invaluable insights capable of improving every area of the business.

3. Competitive advantage: Implementation of business intelligence solutions enables SMBs to become more competitive, especially with respect to competitors who don't use such valuable information.

4. Reduced customer turnover: The ability to identify the circumstances in which a customer chooses not to purchase a product or service provides powerful insight into changing that behavior.

GETTING STARTED
Keep it simple with customer data. To avoid information overload, start small with data that is collected from your customers. Target buyer behavior by segmenting and separating first-time and repeat customers. Look at differences in purchasing behavior, which marketing efforts have yielded the best results, and what constitutes high-value and low-value buying behaviors.
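As a concrete illustration of that first step, the short Python sketch below splits an order history into first-time and repeat buyers and compares their spend. The orders.csv file and its column names are hypothetical placeholders; the point is simply how little data is needed to start.

import pandas as pd

# Hypothetical order history: one row per order, exported from any
# point-of-sale or e-commerce system.
orders = pd.read_csv("orders.csv")        # columns: customer_id, order_total

# Segment customers by how many orders they have placed.
per_customer = orders.groupby("customer_id").agg(
    orders=("order_total", "count"),
    spend=("order_total", "sum"),
)
per_customer["segment"] = per_customer["orders"].map(
    lambda n: "repeat" if n > 1 else "first-time"
)

# Compare the two segments: average spend and share of revenue.
summary = per_customer.groupby("segment")["spend"].agg(["mean", "sum"])
print(summary)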

According to Zoher Karu, eBay's vice president of global customer optimization and data, the best strategy is to "take one specific process or customer touch point, make changes based on data for that specific purpose, and do it in a way that's repeatable."

PUT THE FOUNDATION IN PLACE
Infrastructure considerations. In order to make better decisions using customer data, you need to make sure your servers, networking, and storage offer the performance, scale, and reliability required to get the most out of your stored information. You need a simple, reliable, affordable solution that will deliver enterprise-grade capabilities to store, access, manage, and protect your data.

Turnkey solutions such as the HPE Flex Solutions for SMB with Microsoft SQL Server 2014 enable any-sized business to drive more revenue from critical customer information. This solution offers built-in security to protect your customers' critical information assets and is designed for ease of deployment. It has a simple-to-use, familiar toolset and provides data protection together with optional encryption. Get more information in the whitepaper "Why Hewlett Packard Enterprise platforms for BI with Microsoft® SQL Server 2014."

Some midsize businesses opt to work with an experienced service provider to deploy a Big Data solution.

LIKE SAVING FOR RETIREMENT, THE EARLIER YOU START THE BETTER
One thing is clear – the time to develop and enhance your data insight capability is now. For more information, read the e-book "Turning big data into business insights" or talk to your local reseller for help.

Kelley Bowen is a member of Hewlett Packard Enterprise's Small and Midsized Business Marketing Segment team, responsible for creating awareness for HPE's Just Right IT portfolio of products, solutions, and services for SMBs.

Kelley works closely with HPE's product divisions to create and deliver best-of-breed IT solutions sized and priced for the unique needs of SMBs. Kelley has more than 20 years of high-tech strategic marketing and management experience with global telecom and IT manufacturers.

33

As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again, I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below, I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average, they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration, we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.

34

35

The latest reports on IT security all seem to point to a similar trend: both the frequency and the costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals, industries which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances: Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Has Important Insights For Nonstop Users

36

The malicious insider threat is the most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were the most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average, over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privilege, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kenneth.scudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time, it was an experiment on my behalf. Back in the early Tandem days, there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair, hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then how young he was. It's just funny when you start getting into sales engagements and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long, and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And with the camaraderie we have in the group, not only within the HPE team but across the community, everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh, yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP at Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later, he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day, he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times, when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New: Now audits 100% of all SQL/MX and MP user activity

Integrated with XYGATE Merged Audit


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years, the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release had many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012, a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first. Note that upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time, there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrapper automatically looks for additional, optional OpenVMS-specific source files and scripts to run to supplement its operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the parts that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
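The mechanism is straightforward to picture. The sketch below, written in Python purely for illustration (the real tool is a DCL command procedure), shows the general idea of such a generator: resolve the macros it knows how to detect, leave the rest alone, and append an include of the hand-maintained platform overrides. The file names and the table of known values are assumptions, not the actual config_h.com logic.

import re

# Values the generator can work out for the target platform; everything else
# in config.h.in is left as-is.  The table is purely illustrative.
KNOWN = {"HAVE_UNISTD_H": "1", "HAVE_STDINT_H": "1"}


def generate_config_h(template_path="config.h.in", output_path="config.h"):
    lines_out = []
    with open(template_path) as template:
        for line in template:
            match = re.match(r"#\s*undef\s+(\w+)", line)
            if match and match.group(1) in KNOWN:
                # Turn "#undef NAME" into a concrete definition.
                lines_out.append(f"#define {match.group(1)} {KNOWN[match.group(1)]}\n")
            else:
                lines_out.append(line)
    # Package-specific corrections live in a separate, hand-maintained file.
    lines_out.append('#include "config_vms.h"\n')
    with open(output_path, "w") as output:
        output.writelines(lines_out)


if __name__ == "__main__":
    generate_config_h()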

In many ways, the ability either to easily port Open Source software to OpenVMS or to maintain a code base consistent between OpenVMS and other platforms is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are also tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and he has had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way, he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently, Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined. Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate, in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system, even though the initial release was fifteen years ago [2].

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee [3], as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems. In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.¹

Integrating Data Protection Into Legacy Systems: Methods And Practices — Jason Paul Kazarian

¹ This analysis excludes the Anthem, Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access security and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

bull System complexity legacy systems evolve over time and slowly adapt to handle increasingly complex business operations The more complex a system the more difficulty protecting that system from new security threats

bull Lack of knowledge the original designers and implementers of a legacy system may no longer be available to perform modifications7 Also critical system elements developed in-house may be undocumented meaning current employees may not have the knowledge necessary to perform modifications In other cases software source code may have not survived a storage device failure requiring assembly level patching to modify a critical system function

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:

Integrating Data Protection Into Legacy Systems: Methods and Practices | Jason Paul Kazarian

2 We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

The four layers form a stack, top to bottom: Application, Database, Object, Storage. Flows between adjacent layers transport clear data via a secure tunnel; the example traffic is formatted data items (Application to Database), files and directories (Database to Object), and disk blocks (Object to Storage).

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically challenging (or both). But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is (as illustrated by the sketch following this list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
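To make these three properties concrete, here is a minimal Python sketch. It is emphatically not the NIST FFX/FF1 standard and not production cryptography; it is a toy, keyed Feistel construction over 16-digit strings (the function names, key, and round count are illustrative assumptions) that simply exhibits the equivalent, referential, and reversible behavior described above.

    import hashlib
    import hmac

    MOD = 10 ** 8        # each Feistel half is 8 digits
    ROUNDS = 10
    KEY = b"demo-key-not-for-production"   # illustrative key only

    def _round_value(key: bytes, rnd: int, half: int) -> int:
        """Keyed pseudo-random function for one Feistel round."""
        msg = bytes([rnd]) + half.to_bytes(8, "big")
        return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest()[:8], "big") % MOD

    def protect(pan: str, key: bytes = KEY) -> str:
        """Deterministically map one 16-digit string to another 16-digit string."""
        left, right = int(pan[:8]), int(pan[8:])
        for rnd in range(ROUNDS):
            left, right = right, (left + _round_value(key, rnd, right)) % MOD
        return f"{left:08d}{right:08d}"

    def reveal(token: str, key: bytes = KEY) -> str:
        """Invert protect() by running the Feistel rounds backwards."""
        left, right = int(token[:8]), int(token[8:])
        for rnd in reversed(range(ROUNDS)):
            left, right = (right - _round_value(key, rnd, left)) % MOD, left
        return f"{left:08d}{right:08d}"

    token = protect("4111111111111111")
    assert token.isdigit() and len(token) == 16      # equivalent: same character set and length
    assert token == protect("4111111111111111")      # referential: same input, same output
    assert reveal(token) == "4111111111111111"       # reversible: original is recoverable

A standards-based FF1 or FF3 implementation would replace this toy construction in practice, but the integration pattern it illustrates (protect at ingress, reveal only at the authorized egress point) stays the same.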

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
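Continuing the toy sketch above (with made-up sample records), the referential property is what keeps join keys intact when anonymized data sets are handed to an outside analytics provider:

    purchases = [("4111111111111111", "2016-01-04", 25.00),
                 ("5500000000000004", "2016-01-05", 110.50)]
    loyalty = [("4111111111111111", "GOLD"),
               ("5500000000000004", "SILVER")]

    # Anonymize the join key before either data set leaves the organization.
    shared_purchases = [(protect(pan), date, amount) for pan, date, amount in purchases]
    shared_loyalty = {protect(pan): tier for pan, tier in loyalty}

    # The third party can still correlate the two sets on the protected key,
    # without ever seeing a usable card number.
    for token, date, amount in shared_purchases:
        print(token, date, amount, shared_loyalty[token])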

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary. Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronics Engineers. 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business, to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story. RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, the program is pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


33

As the Customer References Manager at Aruba, a Hewlett Packard Enterprise company, I engage with customers and learn how our products solve their problems. Over and over again I hear that they are seeing explosive growth in the number of devices accessing their networks.

As these demands continue to grow, security takes on new importance. Most of our customers have lean IT teams and need simple, automated, easy-to-manage security solutions their teams can deploy. They want robust security solutions that easily enable onboarding, authentication, and policy creation and management for their different groups of users. ClearPass delivers these capabilities.

Below I've shared how customers across different vertical markets have achieved some of these goals. The Denver Museum of Nature and Science hosts 1.4 million guests each year, who are treated to robust Aruba Wi-Fi access and mobility-enabled exhibits throughout the 716,000 sq. ft. facility.

The Museum also relies on Aruba ClearPass to make external access privileges as easy to manage as internal credentials. ClearPass Guest gives Museum visitors and contractors rich, secure guest access that's automatically separated from internal traffic.

To safeguard its multivendor wireless and wired environment, the Museum uses ClearPass for complete network access control. ClearPass combines ultra-scalable, next-generation AAA (Authentication, Authorization, and Accounting) services with a policy engine that leverages contextual data based on user roles, device types, app usage, and location – all from a single platform. Read the case study.

Lausanne University Hospital (Centre Hospitalier Universitaire Vaudois, or CHUV) uses ClearPass for the authentication of staff and for guest access for patients, their families, and others. Built-in ClearPass device profiling capabilities are used to create device-specific enforcement policies for differentiated access. User access privileges can be easily granted or denied based on device type, ownership status, or operating system.

CHUV relies on ClearPass to deliver Internet access to patients and visitors via an easy-to-use portal. The IT organization loves the limited configuration and management requirements due to the automated workflow.

On average they see 5,000 devices connected to the network at any time and have experienced good, consistent performance, meeting the needs of staff, patients, and visitors. Once the environment was deployed and ClearPass configured, policy enforcement and overall maintenance effort decreased, freeing up IT for other things. Read the case study.

Trevecca Nazarene University leverages Aruba ClearPass for network access control and policy management. ClearPass provides advanced role management and streamlined access for all Trevecca constituencies and guests. During Trevecca's most recent fall orientation period, ClearPass helped the institution shine. "Over three days of registration we had over 1,800 new devices connect through ClearPass with no issues," said John Eberle, Deputy CIO of Infrastructure. "The tool has proven to be rock solid." Read the case study.

If your company is looking for a security solution that is simple, automated, easy to manage and deploy, and low maintenance, ClearPass has your security concerns covered.

SECURITY CONCERNS CLEARPASS HAS YOU COVERED

Diane Fukuda

Diane Fukuda is the Customer References Manager for Aruba, a Hewlett Packard Enterprise Company. She is a seasoned marketing professional who enjoys engaging with customers, learning how they use technology to their advantage, and telling their success stories. Her hobbies include cycling, scuba diving, organic gardening, and raising chickens.

34

35

The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development and Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users

36

Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.
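As a rough, purely illustrative reading of those figures (the study's exact methodology is not reproduced here, and the dollar amounts below are assumptions), ROI can be thought of as net annual benefit relative to annual cost:

    def simple_roi(annual_benefit: float, annual_cost: float) -> float:
        """Net annual benefit expressed as a fraction of annual cost."""
        return (annual_benefit - annual_cost) / annual_cost

    # Assumed figures: a $1.0M/yr encryption program that avoids an expected
    # $1.21M/yr in breach-related losses corresponds to the 21% ROI reported
    # for encryption technologies.
    print(f"{simple_roi(1_210_000, 1_000_000):.0%}")   # prints 21%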

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kennethscudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much then from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring a person early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf back in the early Tandem days, and there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value-proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for this six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff!

Jeff: Thank you.

40

SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.



41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV in order to use them.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools: AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools: CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of Jan 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional, OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for CPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional CPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of CPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
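The following Python fragment is only a rough illustration of that general idea (it is not the actual DCL config_h.com, and the feature table is an assumption): turn the #undef placeholders in a config.h.in template into #define lines for features known to exist on the target, and defer everything else to a hand-maintained config_vms.h.

    # Illustrative sketch only; the real procedure is a DCL script with far
    # more package-specific logic.
    KNOWN_FEATURES = {"HAVE_STRING_H": "1", "HAVE_UNISTD_H": "1"}  # assumed examples

    def generate_config(template_path: str, output_path: str) -> None:
        lines = []
        with open(template_path) as template:
            for line in template:
                stripped = line.strip()
                if stripped.startswith("#undef "):
                    name = stripped.split()[1]
                    if name in KNOWN_FEATURES:
                        lines.append(f"#define {name} {KNOWN_FEATURES[name]}\n")
                    else:
                        lines.append(f"/* {name} deferred to config_vms.h */\n")
                    continue
                lines.append(line)
        lines.append('#include "config_vms.h"\n')   # platform-specific overrides come last
        with open(output_path, "w") as output:
            output.writelines(lines)

    # generate_config("config.h.in", "config.h")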

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts. These include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years: he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to see your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined. Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems; even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks as vulnerabilities are discovered and documented in more modern systems attackers use these unpatched vulnerabilities

to exploit an unsupported system Continuing this example Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 20144 This presents dozens of opportunities for hackers to exploit organizations still running XP

Security threats against legacy systems In June 2010 Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and publishing the inner workings of the Stuxnet computer virus5 Since then organized and state-sponsored hackers have profited from this cookbook for stealing data We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publically published by Health and Human Services (HHS) 6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant bounded by O(1) the number of records exposed has increased at O(2n) as illustrated by the following diagram1

Integrating Data Protection Into Legacy Systems: Methods and Practices, by Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Figure: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic between layers includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe. The sketch below illustrates these three properties.
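To make these properties concrete, here is a minimal, self-contained Python sketch. It is purely illustrative: it uses a toy Feistel-style permutation over 16-digit strings to mimic the equivalent, referential, and reversible behavior described above. It is not an implementation of the NIST FFX/FF1 algorithms or of any HPE product, and the key shown is a placeholder; a production system would use a vetted FPE library and managed keys.

import hashlib
import hmac

KEY = b"demo-key-not-for-production"   # placeholder key, illustrative only
ROUNDS = 10
HALF = 8                               # digits per half of a 16-digit PAN
MOD = 10 ** HALF

def _round_value(round_no: int, half_value: int) -> int:
    # Pseudo-random round function derived from HMAC-SHA256.
    msg = f"{round_no}:{half_value:0{HALF}d}".encode()
    digest = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % MOD

def protect(pan: str) -> str:
    # Map one 16-digit string to another: same length, same character set.
    assert len(pan) == 16 and pan.isdigit()
    left, right = int(pan[:HALF]), int(pan[HALF:])
    for rnd in range(ROUNDS):
        left, right = right, (left + _round_value(rnd, right)) % MOD
    return f"{left:0{HALF}d}{right:0{HALF}d}"

def unprotect(token: str) -> str:
    # Invert protect() by running the rounds backwards with the same key.
    left, right = int(token[:HALF]), int(token[HALF:])
    for rnd in reversed(range(ROUNDS)):
        left, right = (right - _round_value(rnd, left)) % MOD, left
    return f"{left:0{HALF}d}{right:0{HALF}d}"

card = "4111111111111111"
token = protect(card)
print(token)                    # equivalent: still a 16-digit string
print(protect(card) == token)   # referential: the same input always maps to the same output
print(unprotect(token) == card) # reversible: the original is recoverable with the key

Run this twice with the same key and card number and the token is identical, which is exactly the property that preserves primary and foreign key relationships when a whole column is protected.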

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data, as the brief illustration below shows.
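As a small illustration of the referential property at work, consider handing two record sets to an outside analytics firm. Assuming a deterministic, collision-free protect() function such as the sketch above (the data values here are invented), related records can still be joined on the protected key even though the analyst never holds a usable card number:

# Hypothetical data sets keyed by card number, shared only after protection.
purchases = [("4111111111111111", "store-21", 57.90),
             ("4111111111111111", "web", 112.00),
             ("5500005555555559", "store-07", 12.50)]
support_calls = [("4111111111111111", "billing dispute"),
                 ("5500005555555559", "card replacement")]

# Protect the key column before the data leaves the organization.
shared_purchases = [(protect(pan), channel, amount) for pan, channel, amount in purchases]
shared_calls = [(protect(pan), reason) for pan, reason in support_calls]

# The analytics firm can still correlate activity per (anonymized) customer...
per_customer = {}
for token, channel, amount in shared_purchases:
    per_customer.setdefault(token, []).append((channel, amount))

# ...without ever seeing a real account number.
for token, reason in shared_calls:
    print(token, reason, per_customer.get(token, []))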

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard, and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers. 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the best practices of modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?


34

35

The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful:

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million, respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications, and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime, which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs, respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director, Business Development & Strategic Alliances: Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales, and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE), and Arthur Andersen Business Consulting. A former navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users

36

Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.
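For the SIEM point in particular, a common integration pattern is to forward security events in ArcSight's Common Event Format (CEF) over syslog. The Python snippet below is a generic, hedged sketch of that pattern only; the hostname, port, vendor/product strings, and field choices are illustrative, and it is not XYGATE Merged Audit or any specific HPE/XYPRO interface.

import socket
from datetime import datetime, timezone

SIEM_HOST, SIEM_PORT = "siem.example.com", 514   # illustrative collector address

def cef_event(signature_id: str, name: str, severity: int, extensions: dict) -> str:
    # Build a CEF:0 record: CEF:0|vendor|product|version|signatureID|name|severity|extensions
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    return f"CEF:0|ExampleVendor|LegacyAudit|1.0|{signature_id}|{name}|{severity}|{ext}"

def send_to_siem(message: str) -> None:
    # Ship one event to the collector as a UDP syslog datagram (RFC 3164 style).
    timestamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    payload = f"<134>{timestamp} legacyhost {message}"   # facility local0, severity info
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode(), (SIEM_HOST, SIEM_PORT))

send_to_siem(cef_event("100", "Privileged logon outside change window", 7,
                       {"suser": "oper7", "src": "10.1.2.3", "outcome": "success"}))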

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle: So it doesn't differ too much, then, from your previous role?

Jeff: No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent, and people. I really feel good about moving into the role, and I'm definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring someone so early in their career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing, and I realized then he was so young. It's just funny when you start getting to sales engagement and you're peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies.

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

It was really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers, and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what would I want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy, and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do and to weave it into your own style, is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who has motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for this six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy!" He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff!

Jeff: Thank you.

40

SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release included many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV in order to use them.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when performing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

config_h.com generates a config.h file that has an include of "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
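The general idea behind such a header-generation step can be sketched in a few lines of Python. This is only an illustration of the approach (turning "#undef FOO" lines from config.h.in into concrete definitions and appending a platform-specific include); it is not the actual config_h.com DCL procedure, and the feature list shown is made up:

# Illustrative only: mimic the shape of a generated config.h, not the real config_h.com.
KNOWN_FEATURES = {"HAVE_UNISTD_H": "1", "HAVE_STRDUP": "1"}   # hypothetical probe results

def generate_config_h(template_path: str, output_path: str) -> None:
    lines_out = []
    with open(template_path) as template:
        for line in template:
            stripped = line.strip()
            if stripped.startswith("#undef "):
                macro = stripped.split()[1]
                if macro in KNOWN_FEATURES:
                    lines_out.append(f"#define {macro} {KNOWN_FEATURES[macro]}\n")
                else:
                    lines_out.append(f"/* #undef {macro} */\n")   # leave unknowns undefined
            else:
                lines_out.append(line)
    # Platform-specific fix-ups live in a separate, hand-maintained header.
    lines_out.append('#include "config_vms.h"\n')
    with open(output_path, "w") as output:
        output.writelines(lines_out)

generate_config_h("config.h.in", "config.h")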

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files."

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project.

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-Postscript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises Recent cyber-attacks suggest legacy systems remain under protected especially considering

the asset values at stake Development of risk mitigations as point solutions has been minimally successful at best completely ineffective at worst

The NIST FFX data protection standard provides publically auditable data protection algorithms that reflect an applicationrsquos underlying data structure and storage semantics Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breachrsquos consequences

This paper will explore the application of data protection in a typical legacy system architecture Best practices are identified and presented

Legacy systems defined Traditionally legacy systems are complex information systems initially developed well in the past that remain critical to the business in which these systems operate in spite of being more difficult or expensive to maintain than modern systems1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time

We can classify legacy systems as supported to unsupported We consider a legacy system as supported when operating system publisher provides security patches on a regular open-market basis For example IBM zOS is a supported legacy system IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates For example Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems even though the US Navy obtains security patches for a nine million dollar annual fee3 as such patches are not offered to commercial XP or Server 2003 owners

Unsupported legacy systems present additional security risks as vulnerabilities are discovered and documented in more modern systems attackers use these unpatched vulnerabilities

to exploit an unsupported system Continuing this example Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 20144 This presents dozens of opportunities for hackers to exploit organizations still running XP

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained essentially constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices. Jason Paul Kazarian

1 This analysis excludes the Anthem, Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware integration, software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection (OSI) seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and application.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


2 We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. From top to bottom the layers are Application, Database, Object and Storage; the flows between adjacent layers carry, for example, formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is (as illustrated by the sketch following this list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
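To make these properties concrete, the short Python sketch below implements a toy format-preserving transform over digit strings: a keyed Feistel network whose round function is derived from HMAC-SHA256. It is an illustration only, not the NIST FFX/FF1 algorithm described in this article, and the function names, key and test card number are hypothetical; a production system would use a vetted FPE implementation.

import hashlib
import hmac

ROUNDS = 10  # an even round count keeps the half lengths aligned for decryption


def _round_value(key: bytes, rnd: int, half: str, modulus: int) -> int:
    """Pseudo-random round function derived from HMAC-SHA256, reduced mod 10^len."""
    digest = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % modulus


def fpe_encrypt(key: bytes, digits: str) -> str:
    """Map a digit string to another digit string of the same length."""
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for rnd in range(ROUNDS):
        mod = 10 ** len(left)
        new_right = str((int(left) + _round_value(key, rnd, right, mod)) % mod).zfill(len(left))
        left, right = right, new_right
    return left + right


def fpe_decrypt(key: bytes, digits: str) -> str:
    """Invert fpe_encrypt for the same key."""
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for rnd in reversed(range(ROUNDS)):
        mod = 10 ** len(right)
        new_left = str((int(right) - _round_value(key, rnd, left, mod)) % mod).zfill(len(right))
        left, right = new_left, left
    return left + right


if __name__ == "__main__":
    key = b"demo-key-not-for-production"
    pan = "4111111111111111"                              # 16-digit test card number
    token = fpe_encrypt(key, pan)
    assert len(token) == len(pan) and token.isdigit()     # equivalent: same length and character set
    assert token == fpe_encrypt(key, pan)                 # referential: same input, same output
    assert fpe_decrypt(key, token) == pan                 # reversible: the inverse recovers the original
    print(pan, "->", token)

Because the construction is deterministic for a given key and permutes same-length digit strings, the output exhibits the equivalent, referential and reversible properties listed above.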

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of breached data increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI) or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm with equivalent, referential and reversible properties enables performing analytics on large data sets outside of an organization's direct control; a small sketch of a join on de-identified data follows.
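As an illustration of why the referential property matters for sharing, the following sketch reuses the hypothetical fpe_encrypt() from the example above; the sample records are invented. Two record sets are de-identified before being handed to a third party, yet they can still be joined on the protected identifier without exposing the original value.

# Reuses the fpe_encrypt() sketch above; names and sample records are illustrative only.
key = b"demo-key-not-for-production"

visits = [("3055512345", "2015-07-01", "screening")]   # (patient id, date, activity)
lab_results = [("3055512345", "A1C", 7.9)]             # (patient id, test, value)

# De-identify both data sets before passing them to an outside analytics firm.
visits_deid = [(fpe_encrypt(key, pid), date, act) for pid, date, act in visits]
labs_deid = [(fpe_encrypt(key, pid), test, val) for pid, test, val in lab_results]

# Because protection is referential, the analytics firm can still join on the
# protected id without ever seeing the original patient identifier.
joined = [(v, l) for v in visits_deid for l in labs_deid if v[0] == l[0]]
print(joined)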

3 American Express uses 15 digits, while Discover, MasterCard and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360-degree view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hpe.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

35

The latest reports on IT security all seem to point to a similar trend: both the frequency and costs of cyber crime are increasing. While that may not be too surprising, the underlying details and sub-trends can sometimes be unexpected and informative. The Ponemon Institute's recent report, "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise, definitely provides some noteworthy findings which may be useful for NonStop users.

Here are a few key findings of that Ponemon study which I found insightful.

Cyber crime cost is highest in industry verticals that also rely heavily on NonStop systems. The report finds that the cost of cyber crime is highest by far in the Financial Services and Utilities & Energy sectors, with average annualized costs of $13.5 million and $12.8 million respectively. As we know, these two verticals are greatly dependent on NonStop. Other verticals with high average cyber crime costs that are also major users of NonStop systems include the Industrial, Transportation, Communications and Retail industries. So while we've not seen the NonStop platform in the news for security breaches, it's clear that NonStop systems operate in industries frequently targeted by cyber criminals and which suffer high costs of cyber crime – which means NonStop systems should be protected accordingly.

Business disruption and information loss are the most expensive consequences of cyber crime. Among the participants in the study, business disruption and information loss represented the two most expensive sources of external costs: 39% and 35% of costs respectively. Given the types of mission-critical business applications that often run on the NonStop platform, these sources of cyber crime cost should be of high interest to NonStop users and need to be protected against (for example, protecting against data breaches with a NonStop tokenization or encryption solution).

Ken Scudder, Senior Director Business Development and Strategic Alliances. Ken joined XYPRO in 2012 with more than a decade of enterprise software experience in product management, sales and business development. Ken is PCI-ISA certified, and his previous experience includes positions at ACI Worldwide, CA Technologies, Peregrine Systems (now part of HPE) and Arthur Andersen Business Consulting. A former Navy officer and US diplomat, Ken holds an MBA from the University of Southern California and a Bachelor of Science degree from Rensselaer Polytechnic Institute.

Ken Scudder, XYPRO Technology

Cyber Crime Report Has Important Insights For NonStop Users

36

Malicious insider threat is most expensive and difficult to resolve per incident. The report found that 98-99% of the companies experienced attacks from viruses, worms, Trojans and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). In contrast, while the study found that "only" 35% of companies had experienced malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking. Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI. While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.
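As a rough illustration of how such a figure is computed, ROI here is simply net annual benefit relative to annual cost. The numbers below are placeholders for illustration and do not come from the Ponemon study.

def simple_roi(annual_benefit: float, annual_cost: float) -> float:
    """Return ROI as a percentage: net benefit relative to cost."""
    return (annual_benefit - annual_cost) / annual_cost * 100

# Placeholder figures, for illustration only: a control costing $1.0M per year
# that avoids an estimated $1.23M in annualized cyber crime cost yields a 23% ROI.
print(f"{simple_roi(1_230_000, 1_000_000):.0f}% ROI")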

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me (kenneth.scudder@xypro.com) or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom's upcoming retirement, their unique relationship and plans for the future of NonStop.

Gabrielle Tell us about how things have been going while Tom prepares to retire.

Jeff Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle So the transition has already taken place?

Jeff Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that's how we wanted to tie that together. It's been a natural transition. It wasn't a big shock to the system or anything.

Gabrielle So it doesn't differ too much then from your previous role?

Jeff No, it's very similar. We're both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It's very familiar in terms of processes, talent and people. I really feel good about moving into the role and I'm definitely ready for it.

Gabrielle Could you give us a little bit of information about your background leading into your time at HPE?

Jeff My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring me so early in my career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom At the time it was an experiment on my behalf; back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle Could you share one?

Tom Well, Jeff came in once and he said, "I have to go home because my mother was in an accident." He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I'm visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20 mph in a 40 mph zone, and I thought, "His poor old mother." I asked how old she was and he said, "56." I was 57 at the time. She was my age. He started laughing and I realized then he was so young. It's just funny when you start getting into sales engagement and you're peers, and then you realize this difference in age.

Jeff When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig's startup companies. It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle That's such a great story.

Jeff It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers and the partners. All of that just lends itself to the feeling you would want to have.

Tom Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle It really is a unique community. I think we are all lucky to be a part of it.

Jeff Agreed.

Tom I've worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years and they are clearly team players. They believe in the team concept and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle Jeff, what was it like to have Tom as your long-time mentor?

Jeff It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle Tom, what has it been like from your perspective to be Jeff's mentor?

Tom Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom Oh yes. I think everyone looks for a mentor and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management; later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years and we see each other very often. He's one of the smartest men I know and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff That's an interesting question, because the benefit of having him here for this transition for these six months is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle It's just kind of a bittersweet moment.

Jeff Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle Yes, Tom is kind of a legend in the NonStop space.

Jeff He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle Well, it sounds like an interesting opportunity and at an interesting time.

Jeff With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle We wish you the best of luck in your new position, Jeff.

Jeff Thank you.

40

SQLXPress – Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release brought many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install or upgrade GNV, then GNV must be installed first; also, upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the items that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
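As an illustration of that idea only (the real config_h.com is a DCL procedure with far more probing logic), a minimal Python sketch might resolve the #undef lines it knows about, defer the rest, and append the include of config_vms.h. The KNOWN_DEFINES table and file names here are hypothetical.

# Illustrative sketch of the config.h generation idea; not the actual GNV tool.
KNOWN_DEFINES = {
    "HAVE_STRING_H": "1",   # hypothetical feature table for the target system
    "HAVE_UNISTD_H": "1",
    "STDC_HEADERS": "1",
}

def generate_config_h(template_path: str, output_path: str) -> None:
    """Turn config.h.in into config.h, deferring unresolved items to config_vms.h."""
    with open(template_path) as src, open(output_path, "w") as dst:
        for line in src:
            stripped = line.strip()
            if stripped.startswith("#undef "):
                name = stripped.split()[1]
                if name in KNOWN_DEFINES:
                    dst.write(f"#define {name} {KNOWN_DEFINES[name]}\n")
                else:
                    dst.write(f"/* {name} left for config_vms.h */\n")
            else:
                dst.write(line)
        # Package-specific overrides live in a hand-maintained header.
        dst.write('\n#include "config_vms.h"\n')

# Example use: generate_config_h("config.h.in", "config.h")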

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers (including Nobel physicists and multi-national oceanography cruises) to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS, as well as several Python-based applications including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years: he ported MySQL, started the port of PostgreSQL and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises Recent cyber-attacks suggest legacy systems remain under protected especially considering

the asset values at stake Development of risk mitigations as point solutions has been minimally successful at best completely ineffective at worst

The NIST FFX data protection standard provides publically auditable data protection algorithms that reflect an applicationrsquos underlying data structure and storage semantics Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breachrsquos consequences

This paper will explore the application of data protection in a typical legacy system architecture Best practices are identified and presented

Legacy systems defined Traditionally legacy systems are complex information systems initially developed well in the past that remain critical to the business in which these systems operate in spite of being more difficult or expensive to maintain than modern systems1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time

We can classify legacy systems as supported to unsupported We consider a legacy system as supported when operating system publisher provides security patches on a regular open-market basis For example IBM zOS is a supported legacy system IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates For example Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems even though the US Navy obtains security patches for a nine million dollar annual fee3 as such patches are not offered to commercial XP or Server 2003 owners

Unsupported legacy systems present additional security risks as vulnerabilities are discovered and documented in more modern systems attackers use these unpatched vulnerabilities

to exploit an unsupported system Continuing this example Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 20144 This presents dozens of opportunities for hackers to exploit organizations still running XP

Security threats against legacy systems In June 2010 Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and publishing the inner workings of the Stuxnet computer virus5 Since then organized and state-sponsored hackers have profited from this cookbook for stealing data We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publically published by Health and Human Services (HHS) 6

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant, bounded by O(1), the number of records exposed has grown exponentially, on the order of O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem, Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before the system can be modified. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection (OSI) seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram.


2 We use the term "protection" to mean a generic algorithm that transforms data from the original (plain-text) form to an encoded (cipher-text) form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection
When adding data protection to a legacy system, we obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with that of another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
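To make the three properties concrete, here is a minimal, illustrative Python sketch of a format-preserving transform over digit strings. It is not the NIST FF1/FFX algorithm and not production code (the key, round count, and HMAC-based round function are arbitrary choices for the demo); it only shows what "equivalent, referential, and reversible" mean in practice.

```python
import hashlib
import hmac

KEY = b"demo-key-not-for-production"   # stand-in for a properly managed key
ROUNDS = 10

def _prf(value: int, rnd: int, width: int) -> int:
    """Deterministic per-round pseudo-random function, reduced to `width` digits."""
    digest = hmac.new(KEY, f"{rnd}:{value}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)

def protect(digits: str) -> str:
    """Map a digit string to another digit string of the same length (Feistel-style)."""
    half = len(digits) // 2
    l_w, r_w = half, len(digits) - half
    left, right = int(digits[:half]), int(digits[half:])
    for rnd in range(ROUNDS):
        if rnd % 2 == 0:
            right = (right + _prf(left, rnd, r_w)) % (10 ** r_w)
        else:
            left = (left + _prf(right, rnd, l_w)) % (10 ** l_w)
    return f"{left:0{l_w}d}{right:0{r_w}d}"

def unprotect(digits: str) -> str:
    """Invert protect() by running the rounds backwards and subtracting."""
    half = len(digits) // 2
    l_w, r_w = half, len(digits) - half
    left, right = int(digits[:half]), int(digits[half:])
    for rnd in reversed(range(ROUNDS)):
        if rnd % 2 == 0:
            right = (right - _prf(left, rnd, r_w)) % (10 ** r_w)
        else:
            left = (left - _prf(right, rnd, l_w)) % (10 ** l_w)
    return f"{left:0{l_w}d}{right:0{r_w}d}"

if __name__ == "__main__":
    pan = "4539148803436467"            # made-up 16-digit number for illustration
    token = protect(pan)
    assert token.isdigit() and len(token) == len(pan)   # equivalent
    assert protect(pan) == token                        # referential
    assert unprotect(token) == pan                      # reversible
    print(pan, "->", token)
```

A real deployment would use a vetted FF1 implementation and centralized key management rather than this toy, but the integration pattern is the same: the legacy database stores only the protected value, and the one trusted component that needs the real number calls the inverse at egress.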

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.
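As a small illustration of why the referential property matters for sharing, the sketch below (reusing the protect() routine from the previous sketch; the records are invented for the example) shows that two de-identified data sets still join correctly on the protected key, so a third party can correlate them without ever seeing a real card number.

```python
# Hypothetical records; in practice these would come from the organization's systems.
purchases = [
    ("4539148803436467", "store-12", 57.90),
    ("4539148803436467", "online", 12.50),
    ("6011000990139424", "store-07", 210.00),
]
loyalty_tier = {
    "4539148803436467": "gold",
    "6011000990139424": "silver",
}

# De-identify before handing the data sets to the analytics firm.
shared_purchases = [(protect(pan), channel, amount) for pan, channel, amount in purchases]
shared_tiers = {protect(pan): tier for pan, tier in loyalty_tier.items()}

# The analytics firm can still group and join per (anonymized) customer.
for token, channel, amount in shared_purchases:
    print(token, shared_tiers[token], channel, amount)
```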

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, Mastercard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as online stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended, as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise": it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, and execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky, especially around peak utilization times such as open enrollment and the financial close, as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1. Reinvigorate RAS (reliability, availability, and serviceability)
Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability, 99.999%, is critical to the success of the business to avoid loss of customers and revenue.
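As a quick worked example of what those availability figures mean in practice, the short sketch below converts an availability percentage into an annual downtime budget; "five nines" works out to roughly five minutes per year.

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("99.9% (three nines)", 0.999),
                            ("99.99% (four nines)", 0.9999),
                            ("99.999% (five nines)", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: about {downtime:,.1f} minutes of downtime per year")
```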

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape
Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality
A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story
RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and it has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software, you'll know your parachute will open – every time.

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

Page 39: Connect Converge Spring 2016

36

Malicious insider threat is most expensive and difficult to resolve per incident
The report found that 98–99% of the companies experienced attacks from viruses, worms, Trojans, and malware. However, while those types of attacks were most widespread, they had the lowest cost impact, with an average cost of $1,900 (weighted by attack frequency). Alternatively, while the study found that "only" 35% of companies had had malicious insider attacks, those attacks took the longest to detect and resolve (on average over 54 days). And with an average cost per incident of $144,542, malicious insider attacks were far more expensive than other cyber crime types. Malicious insiders typically have the most knowledge when it comes to deployed security measures, which allows them to knowingly circumvent them and hide their activities. As a first step, locking your system down and properly securing access based on NonStop best practices and corporate policy will ensure users only have access to the resources needed to do their jobs. A second and critical step is to actively monitor for suspicious behavior and deviation from normal, established processes, which can ensure suspicious activity is detected and alerted on before it culminates in an expensive breach.

Basic security is often lacking
Perhaps the most surprising aspect of the study, to me at least, was that so few of the companies had common security solutions deployed. Only 50% of companies in the study had implemented access governance tools, and fewer than 45% had deployed security intelligence systems or data protection solutions (including data-in-motion protection and encryption or tokenization). From a NonStop perspective, this highlights the critical importance of basic security principles such as strong user authentication, policies of minimum required access and least privileges, no shared super-user accounts, activity and event logging and auditing, and integration of the NonStop system with an enterprise SIEM (like HPE ArcSight). It's very important to note that HPE includes XYGATE User Authentication (XUA), XYGATE Merged Audit (XMA), NonStop SSL/TLS, and NonStop SSH in the NonStop Security Bundle, so most NonStop customers already have much of this capability. Hopefully the NonStop community is more security conscious than the participants in this study, but we can't be sure, and it's worth reviewing whether security fundamentals are adequately implemented.

Security solutions have strong ROI
While it's dismaying to see that so few companies had deployed important security solutions, there is good news in that the report shows that implementation of those solutions can have a strong ROI. For example, the study found that security intelligence systems had a 23% ROI and encryption technologies had a 21% ROI. Access governance had a 13% ROI. So while these security solutions aren't as widely deployed as they should be, there is a good business case for putting them in place.
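For readers less used to the metric, here is a minimal sketch of the arithmetic behind an ROI figure like "23%"; the cost and benefit inputs below are hypothetical and purely illustrative, not numbers from the Ponemon report.

```python
# ROI = (benefit - cost) / cost. Hypothetical annualized figures for illustration.
annual_cost = 1_000_000       # assumed spend on a security intelligence system
annual_benefit = 1_230_000    # assumed avoided-loss and efficiency benefit

roi = (annual_benefit - annual_cost) / annual_cost
print(f"ROI: {roi:.0%}")      # -> ROI: 23%
```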

Those are just a few takeaways from an excellent study; there are many additional interesting points made in the report, and it's worth a full read. The good news is that today there are many great security products available to help you manage security on your NonStop systems, including products sold by HPE as well as products offered by NonStop partners such as XYPRO, comForte, and Computer Security Products.

As always, if you have questions about NonStop security, please feel free to contact me at kennethscudder@xypro.com or your XYPRO sales representative.

Statistics and information in this article are based on the Ponemon Institute "2015 Cost of Cyber Crime Study: Global," sponsored by Hewlett Packard Enterprise.

Ken Scudder, Sr. Director, Business Development and Strategic Alliances, XYPRO Technology Corporation

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor, Jeff Skinner, about Tom's upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle Tell us about how things have been going while Tom prepares to retire

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It's been awesome to have him in the background and be able to leverage his experience while I'm growing into it. I'm really lucky to have that.

Gabrielle So the transition has already taken place

Jeff Yeah The transition really was November 1 2015 which is also the first day of our new fiscal year so thatrsquos how we wanted to tie that together Itrsquos been a natural transition It wasnrsquot a big shock to the system or anything

Gabrielle So it doesnrsquot differ too much then from your previous role

Jeff No itrsquos very similar Wersquore both exclusively NonStop-focused and where I was assigned to the western territory before now I have all of the Americas Itrsquos very familiar in terms of processes talent and people I really feel good about moving into the role and Irsquom definitely ready for it

Gabrielle Could you give us a little bit of information about your background leading into your time at HPE

Jeff: My background with NonStop started in the late 90s, when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance by hiring someone so early in his career. That's what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom At the time it was an experiment on my behalf back in the early Tandem days and there was this idea of hiring a lot of younger people The idea was even though we really lacked an education program to try to mentor these young people and open new markets for Tandem And there are a lot of funny stories that go along with that

Gabrielle Could you share one

Tom Well Jeff came in once and he said ldquoI have to go home because my mother was in an accidentrdquo He reassured me it was just a small fender bendermdashnothing seriousmdashbut she was a little shaken up Irsquom visualizing an elderly woman with white hair hunched over in her car just peering over the steering wheel going 20mph in a 40mph zone and I thought ldquoHis poor old motherrdquo I asked how old she was and he said ldquo56rdquo I was 57 at the time She was my age He started laughing and I realized then he was so young Itrsquos just funny when you start getting to sales engagement and yoursquore peers and then you realize this difference in age

Jeff When Compaq acquired Tandem I went from being focused primarily on NonStop to selling a broader portfolio of products I sold everything from PCs to Tandem equipment It became a much broader sales job Then I left Compaq to join one of Jimmy Treybigrsquos startup companies It was

PASSING THE TORCH: HPE's Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused which came naturally to me because of my background as it would be for anyone selling Tandem equipment

I did that for a few years and then I came back to NonStop after HP acquired Compaq so I came back to work for Tom a second time I was there for three more years then left again and went to IBM for five years where I was focused on financial services Then for the third and final time I came back to work for Tom again in 20102011 So itrsquos my third tour of duty here and itrsquos been a long winding road to get to this point Tom without question has been the most influential person on my career and as a mentor Itrsquos rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job I donrsquot know that I have ever heard that happening

Gabrielle Thatrsquos such a great story

Jeff Itrsquos crazy really You never hear anyone say that kind of stuff Even when I hear myself say it itrsquos like ldquoWow That is pretty coolrdquo And the talent we have on this team is amazing Wersquore a seasoned veteran group for the most part There are people who have been here for over 30 years and therersquos consistent account coverage over that same amount of time You just donrsquot see that anywhere else And the camaraderie we have with the group not only within the HPE team but across the community everybody knows each other because they have been doing it for a long time Maybe itrsquos out there in other places I just havenrsquot seen it The people at HPE are really unconditional in the way that they approach the job the customers and the partners All of that just lends itself to the feeling you would want to have

Tom Every time Jeff left he gained a skill The biggest was when he left to go to IBM and lead the software marketing group there He came back with all kinds of wonderful ideas for marketing that we utilize to this day

Jeff If you were to ask me five years ago where I would envision myself or what would I want to be doing Irsquom doing it Itrsquos a little bit surreal sometimes but at the same time itrsquos an honor

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is marketing; I don't have the desire to get involved with it. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle It really is a unique community I think we are all lucky to be a part of it

Jeff Agreed

Tom Irsquove worked for eight different computer companies in different roles and titles and out of all of them the best group of people with the best product has always been NonStop For me there are four reasons why selling NonStop is so much fun

The first is that itrsquos a very complex product but itrsquos a fun product Itrsquos a value proposition sell not a commodity sell

Secondly itrsquos a relationship sell because of the nature of the solution Itrsquos the highest mission-critical application within our customer base If this system doesnrsquot work these customers could go out of business So that just screams high-level relationships

Third we have unbelievable support The solution architects within this group are next to none They have credibility that has been established over the years and they are clearly team players They believe in the team concept and theyrsquore quick to jump in and help other people

And the fourth reason is the Tandem culture What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile Thatrsquos why I feel like NonStop is unique Itrsquos the best place to sell and work It speaks volumes of why we are the way we are

Gabrielle Jeff what was it like to have Tom as your long-time mentor

Jeff Itrsquos been awesome Everybody should have a mentor but itrsquos a two-way street You canrsquot just say ldquoI need a mentorrdquo It doesnrsquot work like that It has to be a two-way relationship with a person on the other side of it willing to invest the time energy and care to really be effective in being a mentor Tom has been not only the most influential person in my career but also one of the most influential people in my life To have as much respect for someone in their profession as I have for Tom to get to admire and replicate what they do and to weave it into your own style is a cool opportunity but thatrsquos only one part of it

The other part is to see what kind of person he is overall and with his family, friends, and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing, and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle Tom what has it been like from your perspective to be Jeffrsquos mentor

Tom Jeff was easy Hersquos very bright and has a wonderful sales personality Itrsquos easy to help people achieve their goals when they have those kinds of traits and Jeff is clearly one of the best in that area

A really fun thing for me is to see people grow in a job I have been very blessed to have been mentoring people who have gone on to do some really wonderful things Itrsquos just something that I enjoy doing more than anything else

Gabrielle Tom was there a mentor who has motivated you to be able to influence people like Jeff

Tom Oh yes I think everyone looks for a mentor and Irsquom no exception One of them was a regional VP of Tandem named Terry Murphy We met at Data General and hersquos the one who convinced me to go into sales management and later he sold me on coming to Tandem Itrsquos a friendship thatrsquos gone on for 35 years and we see each other very often Hersquos one of the smartest men I know and he has great insight into the sales process To this day hersquos one of my strongest mentors

Gabrielle Jeff what are some of the ideas you have for the role and for the company moving forward

Jeff: One thing we have done incredibly well is to sustain our relationships with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle When Tom leaves in the spring whatrsquos the first order of business once yoursquore flying solo and itrsquos all yours

Jeff Thatrsquos an interesting question because the benefit of having him here for this transition for this six months is that I feel like there wonrsquot be a hard line where all of a sudden hersquos not here anymore Itrsquos kind of strange because I havenrsquot really thought too much about it I had dinner with Tom and his wife the other night and I told them that on June first when we have our first staff call and hersquos not in the virtual room thatrsquos going to be pretty odd Therersquos not necessarily a first order of business per se as it really will be a continuation of what we would have been doing up until that point I definitely am not waiting until June to really get those messages across that I just mentioned Itrsquos really an empowerment and the goals are to make Tom proud and to honor what he has done as a career I know I will have in the back of my mind that I owe it to him to keep the momentum that hersquos built Itrsquos really just going to be putting work into action

Gabrielle Itrsquos just kind of a bittersweet moment

Jeff Yeah absolutely and itrsquos so well-deserved for him His job has been everything to him so I really feel like I am succeeding a legend Itrsquos bittersweet because he wonrsquot be there day-to-day but I am so happy for him Itrsquos about not screwing things up but itrsquos also about leading NonStop into a new chapter

Gabrielle Yes Tom is kind of a legend in the NonStop space

Jeff He is Everybody knows him Every time I have asked someone ldquoDo you know Tom Moylanrdquo even if it was a few degrees of separation the answer has always been ldquoYesrdquo And not only yes but ldquoWhat a great guyrdquo Hersquos been the face of this group for a long time

Gabrielle Well it sounds like an interesting opportunity and at an interesting time

Jeff With what we have now with NonStop X and our hybrid direction it really is an amazing time to be involved with this group Itrsquos got a lot of people energized and itrsquos not lost on anyone especially me I think this will be one of those defining times when yoursquore sitting here five years from now going ldquoWow that was really a pivotal moment for us in our historyrdquo Itrsquos cool to feel that way but we just need to deliver on it

Gabrielle We wish you the best of luck in your new position Jeff

Jeff Thank you

40

SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing, and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation, which is of a similar nature, is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release contained many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require GNV to be installed in order to use them.

If you do install or upgrade GNV, then GNV must be installed first, and be aware that upgrading GNV using the HP GNV kits renames the [VMS$COMMON.GNV] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure scripts and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers for the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with large searches on NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the parts that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.
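For readers who have not used it, the snippet below is a minimal sketch of the kind of call the subprocess module supports; it assumes the OpenVMS port follows the standard CPython subprocess API, and the command shown is purely illustrative.

```python
import subprocess

# Run a command and capture its output. On OpenVMS the command would typically
# be a DCL command or a GNV utility; "uname -a" here is just an example.
output = subprocess.check_output(["uname", "-a"])
print(output)
```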

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40
Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site
Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the ClamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site
Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive
Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives
There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.


A-priori data protection When adding data protection to a legacy system we will obtain better integration at lower cost by minimizing legacy system changes One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection11

As an exercise letrsquos consider ldquowrappingrdquo a legacy system with a new web interface12 that collects payment data from customers As the system collects more and more payment records the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data

Adding data protection at the storage object and database layers may be fiscally or technically (or both) challenging But what if the payment data itself was protected at ingress into the legacy system

Now letrsquos consider applying an FPE algorithm to a credit card number The input to this algorithm is a digit string typically

15 or 16 digits3 The output of this algorithm is another digit string that is

bull Equivalent besides the digit values all other characteristics of the output such as the character set and length are identical to the input

bull Referential an input credit card number always produces exactly the same output This output never collides with another credit card number Thus if a column of credit card numbers is protected via FPE the primary and foreign key relations among linked tables remain the same

bull Reversible the original input credit card number can be obtained using an inverse FPE algorithm

Now as we collect more and more customer records we no longer increase the ldquoblack marketrdquo opportunity If a hacker were to successfully breach our legacy credit card database that hacker would obtain row upon row of protected credit card numbers none of which could be used by the hacker to conduct a payment transaction Instead the payment interface having exclusive access to the inverse FPE algorithm would be the only node able to charge a transaction

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress Even if the data protection stack is breached below the application layer protected data remains anonymized and safe

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach Such breaches harm both businesses costing up to $240 per breached healthcare record13 and their customers costing consumers billions of dollars annually14 As the volume of data breached increases rapidly not just in financial markets but also in health care organizations are under pressure to add data protection to legacy systems

A less obvious benefit of application level data protection is the creation of new benefits from data sharing data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII) personal healthcare information (PHI) or payment card industry (PCI) data This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data

Let us consider two examples of data sharing benefits one from retail operations and one from healthcare Both examples are case studies showing how anonymizing data via an algorithm having equivalent referential and reversible properties enables performing analytics on large data sets outside of an organizationrsquos direct control

3 American Express uses a 15 digits while Discover Master Card and Visa use 16 instead Some store issued credit cards for example the Target Red Card use fewer digits but these are padded with leading zeroes to a full 16 digits

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015

49

“The backbone of the enterprise” – it’s pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and more. When they’re not performing well, it gets noticed: customers’ orders are delayed, staffers can’t get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability and serviceability)

Companies are under constant pressure to improve RAS, whether it’s from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous “five nines” of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue.

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand, and they’ve been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today’s x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape

Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality

A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you’re an SAP shop, you’re probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hpe.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa’s SAP retail system is one of the world’s largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, “We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications.” Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year’s Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership’s Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann’s favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

No, of course you wouldn’t. But that’s effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn’t work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn’t jump out of an airplane without a working parachute, so don’t rely on inadequate recovery solutions to maintain critical IT services when the time comes.

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

With HPE Shadowbase software you’ll know your parachute will open – every time.

You wouldn’t jump out of an airplane unless you knew your parachute worked – would you?

Page 40: Connect Converge Spring 2016

37

I recently had the opportunity to chat with Tom Moylan, Director of Sales for HP NonStop Americas, and his successor Jeff Skinner about Tom’s upcoming retirement, their unique relationship, and plans for the future of NonStop.

Gabrielle: Tell us about how things have been going while Tom prepares to retire.

Jeff: Tom is retiring at the end of May, so we have him doing special projects and advising as he prepares to leave next year, but I officially moved into the new role on November 1, 2015. It’s been awesome to have him in the background and be able to leverage his experience while I’m growing into it. I’m really lucky to have that.

Gabrielle: So the transition has already taken place?

Jeff: Yeah. The transition really was November 1, 2015, which is also the first day of our new fiscal year, so that’s how we wanted to tie that together. It’s been a natural transition. It wasn’t a big shock to the system or anything.

Gabrielle: So it doesn’t differ too much then from your previous role?

Jeff: No, it’s very similar. We’re both exclusively NonStop-focused, and where I was assigned to the western territory before, now I have all of the Americas. It’s very familiar in terms of processes, talent and people. I really feel good about moving into the role and I’m definitely ready for it.

Gabrielle: Could you give us a little bit of information about your background leading into your time at HPE?

Jeff: My background with NonStop started in the late 90s when Tom originally hired me at Tandem. He hired me when I was only a couple of years out of school to manage some of the smaller accounts in the Chicago area. It was a great experience, and Tom took a chance on me by hiring me as a person early in their career. That’s what got him and me off on our start together. It was a challenging position at the time, but it was good because it got me in the door.

Tom: At the time it was an experiment on my behalf. Back in the early Tandem days there was this idea of hiring a lot of younger people. The idea was, even though we really lacked an education program, to try to mentor these young people and open new markets for Tandem. And there are a lot of funny stories that go along with that.

Gabrielle: Could you share one?

Tom: Well, Jeff came in once and he said, “I have to go home because my mother was in an accident.” He reassured me it was just a small fender bender, nothing serious, but she was a little shaken up. I’m visualizing an elderly woman with white hair hunched over in her car, just peering over the steering wheel, going 20mph in a 40mph zone, and I thought, “His poor old mother.” I asked how old she was and he said, “56.” I was 57 at the time. She was my age. He started laughing and I realized then he was so young. It’s just funny when you start getting to sales engagement and you’re peers, and then you realize this difference in age.

Jeff: When Compaq acquired Tandem, I went from being focused primarily on NonStop to selling a broader portfolio of products. I sold everything from PCs to Tandem equipment. It became a much broader sales job. Then I left Compaq to join one of Jimmy Treybig’s startup companies. It was

PASSING THE TORCH: HPE’s Jeff Skinner Steps Up to Replace His Mentor

by Gabrielle Guerrera

Gabrielle Guerrera is the Director of Business Development at NuWave Technologies, a NonStop middleware company founded and managed by her father, Ernie Guerrera. She has a BS in Business Administration from Boston University and is an MBA candidate at Babson College.

38

really ecommerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would be for anyone selling Tandem equipment.

I did that for a few years and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it’s my third tour of duty here, and it’s been a long, winding road to get to this point. Tom, without question, has been the most influential person on my career and as a mentor. It’s rare that you can even have a mentor for that long, and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don’t know that I have ever heard of that happening.

Gabrielle: That’s such a great story.

Jeff: It’s crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it’s like, “Wow. That is pretty cool.” And the talent we have on this team is amazing. We’re a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there’s consistent account coverage over that same amount of time. You just don’t see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community; everybody knows each other because they have been doing it for a long time. Maybe it’s out there in other places, I just haven’t seen it. The people at HPE are really unconditional in the way that they approach the job, the customers and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I’m doing it. It’s a little bit surreal sometimes, but at the same time it’s an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don’t do very well is I don’t have the desire to get involved with marketing. It’s something I’m just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He’s the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I’ve worked for eight different computer companies in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it’s a very complex product, but it’s a fun product. It’s a value proposition sell, not a commodity sell.

Secondly, it’s a relationship sell because of the nature of the solution. It’s the highest mission-critical application within our customer base. If this system doesn’t work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years and they are clearly team players. They believe in the team concept and they’re quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That’s why I feel like NonStop is unique. It’s the best place to sell and work. It speaks volumes of why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It’s been awesome. Everybody should have a mentor, but it’s a two-way street. You can’t just say, “I need a mentor.” It doesn’t work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that’s only one part of it.

The other part is to see what kind of person he is overall and with his family, friends and the people that he meets. He’s the real deal. I’ve just been really, really lucky to get to spend all that time with him. If you didn’t know any better, you would think he’s a salesman’s salesman sometimes, because he is so gregarious, outgoing and such a people person, but he is absolutely genuine in who he is and he always follows through with people. I couldn’t have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff’s mentor?

Tom: Jeff was easy. He’s very bright and has a wonderful sales personality. It’s easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It’s just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who has motivated you to be able to influence people like Jeff?

Tom: Oh yes, I think everyone looks for a mentor and I’m no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he’s the one who convinced me to go into sales management; later he sold me on coming to Tandem. It’s a friendship that’s gone on for 35 years, and we see each other very often. He’s one of the smartest men I know and he has great insight into the sales process. To this day he’s one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can’t imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that’s with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can’t do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That’s

really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with

hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top line revenue.

Gabrielle: When Tom leaves in the spring, what’s the first order of business once you’re flying solo and it’s all yours?

Jeff: That’s an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won’t be a hard line where all of a sudden he’s not here anymore. It’s kind of strange because I haven’t really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he’s not in the virtual room, that’s going to be pretty odd. There’s not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It’s really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he’s built. It’s really just going to be putting work into action.

Gabrielle: It’s just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it’s so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It’s bittersweet because he won’t be there day-to-day, but I am so happy for him. It’s about not screwing things up, but it’s also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, “Do you know Tom Moylan?” even if it was a few degrees of separation, the answer has always been “Yes.” And not only yes, but “What a great guy.” He’s been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It’s got a lot of people energized, and it’s not lost on anyone, especially me. I think this will be one of those defining times when you’re sitting here five years from now going, “Wow, that was really a pivotal moment for us in our history.” It’s cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff!

Jeff: Thank you.

40

SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop.

A single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU’s NOT VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation, which is of a similar nature, is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent “official” release was in November of 2011, when version 3.0-1 was released. While that release had so many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 there was a Community effort started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or

who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first, and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort of John Malmberg on cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com generates a config.h file that has a #include “config_vms.h” in it at the end. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
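
To make that flow concrete, here is a minimal Python sketch of what such a generator does. It is an illustration only, not the actual config_h.com DCL procedure: the feature table, file names and behavior are assumptions for the example.

# Illustrative sketch only - not the actual config_h.com DCL procedure.
# It mimics the general idea: read config.h.in, resolve the #undef lines
# it knows about, and append a VMS-specific override include at the end.

KNOWN_FEATURES = {                      # assumed feature probes for the example
    "HAVE_UNISTD_H": "1",
    "HAVE_STRING_H": "1",
    "PACKAGE_NAME": '"example"',
}

def generate_config_h(template_path="config.h.in", output_path="config.h"):
    lines_out = []
    with open(template_path) as template:
        for line in template:
            stripped = line.strip()
            if stripped.startswith("#undef "):
                symbol = stripped.split()[1]
                if symbol in KNOWN_FEATURES:
                    # Known feature: emit a real definition.
                    lines_out.append(f"#define {symbol} {KNOWN_FEATURES[symbol]}\n")
                else:
                    # Unknown feature: leave it visible but undefined.
                    lines_out.append(f"/* #undef {symbol} */\n")
            else:
                lines_out.append(line)
    # Package-specific fixes live in a separate, hand-maintained header.
    lines_out.append('#include "config_vms.h"\n')
    with open(output_path, "w") as output:
        output.writelines(lines_out)

if __name__ == "__main__":
    generate_config_h()

As the article notes, the package-specific config_vms.h content comes from a separate, product-specific script maintained by the porter.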

In many ways the ability to either easily port Open Source software to OpenVMS or to maintain a code base consistent between OpenVMS and other platforms is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under “Files”.

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel

physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization, and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen’s site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk’s repository has over 300 packages. Links from Jouk’s site get you to Hunter Goatley’s archive, Patrick Moreau’s archive and HP’s archive.

Ruslan’s site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter’s archive: Hunter’s archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering

the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application’s underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach’s consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities

to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices – Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security and telecommunications

domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at

jason.kazarian@hpe.com

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data’s lifetime is usually much longer than any manufacturer’s support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.10

At each layer it’s important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:

Integrating Data Protection Into Legacy Systems: Methods And Practices – Jason Paul Kazarian

2 We use the term “protection” as a generic algorithm transforming data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack]

Layers, from top to bottom: Application, Database, Object, Storage.

Example traffic between the layers: formatted data items; files and directories; disk blocks.

Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.

47

• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning having the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let’s consider “wrapping” a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let’s consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically

15 or 16 digits.3 The output of this algorithm is another digit string that is (a short illustrative sketch follows the list below):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
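
To make these three properties concrete, here is a minimal Python sketch of a keyed, format-preserving transform over 16-digit strings. It is only a toy Feistel construction built on the standard library’s HMAC; it is not the NIST FFX (FF1/FF3) algorithm, not a production implementation, and every name in it is hypothetical.

import hmac, hashlib

ROUNDS = 10
HALF = 8                       # 16-digit inputs split into two 8-digit halves
MOD = 10 ** HALF

def _round_value(key, round_no, half_digits):
    # Keyed pseudo-random function for one Feistel round.
    msg = f"{round_no}:{half_digits}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % MOD

def toy_fpe_encrypt(key, digits):
    # Equivalent: the output is also a 16-digit string.
    # Referential: the same key and input always give the same output.
    left, right = int(digits[:HALF]), int(digits[HALF:])
    for rnd in range(ROUNDS):
        left, right = right, (left + _round_value(key, rnd, f"{right:08d}")) % MOD
    return f"{left:08d}{right:08d}"

def toy_fpe_decrypt(key, digits):
    # Reversible: undo the rounds in the opposite order.
    left, right = int(digits[:HALF]), int(digits[HALF:])
    for rnd in reversed(range(ROUNDS)):
        left, right = (right - _round_value(key, rnd, f"{left:08d}")) % MOD, left
    return f"{left:08d}{right:08d}"

if __name__ == "__main__":
    key = b"demo-key-not-for-production"
    pan = "4111111111111111"            # hypothetical test card number
    protected = toy_fpe_encrypt(key, pan)
    assert len(protected) == 16 and protected.isdigit()
    assert toy_fpe_decrypt(key, protected) == pan
    print(pan, "->", protected)

A real deployment would use a vetted FFX implementation and keep key management outside the application, but the toy version is enough to show equivalence (same length and character set), referential behavior (the same input always maps to the same output) and reversibility.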

Now as we collect more and more customer records, we no longer increase the “black market” opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI) or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data, as the short example below illustrates.

Let us consider two examples of data sharing benefits one from retail operations and one from healthcare Both examples are case studies showing how anonymizing data via an algorithm having equivalent referential and reversible properties enables performing analytics on large data sets outside of an organizationrsquos direct control

3 American Express uses a 15 digits while Discover Master Card and Visa use 16 instead Some store issued credit cards for example the Target Red Card use fewer digits but these are padded with leading zeroes to a full 16 digits

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015

49

ldquoThe backbone of the enterpriserdquo ndash itrsquos pretty common to hear SAP or Oracle business processing applications described that way and rightly so These are true mission-critical systems including enterprise resource planning (ERP) customer relationship management (CRM) supply chain management (SCM) and more When theyrsquore not performing well it gets noticed customersrsquo orders are delayed staffers canrsquot get their work done on time execs have trouble accessing the data they need for optimal decision-making It can easily spiral into damaging financial outcomes

At many organizations business processing application performance is looking creaky ndash especially around peak utilization times such as open enrollment and the financial close ndash as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

T he Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders Established in 2010 Connect FLIT is a separateUS 501 (c)(3) corporation and all donations go directly to scholarship awards

Applications are accepted from around the world and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors including GPA standardized test scores letters of recommendation and a compelling essay

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering a t I o w a S t a te U n i ve rs i t y i n t h e Fa l l o f 2 0 1 6 I n addition to being a part of the honor roll at her high schoo l her in terest in computer sc ience c lasses has evolved into a passion for programming She learned the value of leadership when she was a participant in the Des Moines Partnershiprsquos Youth Leadership Initiative and continued mentoring for the program She combined her love of leadership and computer science together by becoming the president of Hyperstream the computer science club at her high school Ann embraces the spir it of service and has logged over 200 hours of community service One of Annrsquos favorite ac t i v i t i es in h igh schoo l was be ing a par t o f the archery c lub and is look ing to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian Virginia While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial sponsored by RichTech Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America He has obtained his Cisco Certified Network Associate CompTIA A+ Palo Alto Accredited Configuration Engineer and many other certifications Erwin has 47 GPA and plans to attend Virginia Commonwealth University in the fall of 2016

Erwin Karincic

No of course you wouldnrsquot But thatrsquos effectively what many companies do when they rely on activepassive or tape-based business continuity solutions Many companies never complete a practice failover exercise because these solutions are difficult to test They later find out the hard way that their recovery plan doesnrsquot work when they really need it

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of activepassive or tape-based solutions You wouldnrsquot jump out of an airplane without a working parachute so donrsquot rely on inadequate recovery solutions to maintain critical IT services when the time comes

copy2015 Gravic Inc All product names mentioned are trademarks of their respective owners Specifications subject to change without notice

Find out how HPE Shadowbase can help you be ready for anythingVisit wwwshadowbasesoftwarecom and wwwhpcomgononstopcontinuity

Business Partner

With HPE Shadowbase software yoursquoll know your parachute will open ndash every time

You wouldnrsquot jump out of an airplane unless you knew your parachute

worked ndash would you

  1. Facebook 2
  2. Twitter 2
  3. Linked In 2
  4. C3
  5. Facebook 3
  6. Twitter 3
  7. Linked In 3
  8. C4
  9. Stacie Facebook
  10. Button 4
  11. STacie Linked In
  12. Button 6
Page 41: Connect Converge Spring 2016

38

really e-commerce-focused and online transaction processing (OLTP) focused, which came naturally to me because of my background, as it would for anyone selling Tandem equipment.

I did that for a few years, and then I came back to NonStop after HP acquired Compaq, so I came back to work for Tom a second time. I was there for three more years, then left again and went to IBM for five years, where I was focused on financial services. Then, for the third and final time, I came back to work for Tom again in 2010/2011. So it's my third tour of duty here, and it's been a long, winding road to get to this point. Tom, without question, has been the most influential person in my career and as a mentor. It's rare that you can even have a mentor for that long and then have the chance to be able to follow in their footsteps and have them on board as an advisor for six months while you take over their job. I don't know that I have ever heard of that happening.

Gabrielle: That's such a great story.

Jeff: It's crazy, really. You never hear anyone say that kind of stuff. Even when I hear myself say it, it's like, "Wow. That is pretty cool." And the talent we have on this team is amazing. We're a seasoned veteran group for the most part. There are people who have been here for over 30 years, and there's consistent account coverage over that same amount of time. You just don't see that anywhere else. And the camaraderie we have with the group, not only within the HPE team but across the community: everybody knows each other because they have been doing it for a long time. Maybe it's out there in other places; I just haven't seen it. The people at HPE are really unconditional in the way that they approach the job, the customers and the partners. All of that just lends itself to the feeling you would want to have.

Tom: Every time Jeff left, he gained a skill. The biggest was when he left to go to IBM and lead the software marketing group there. He came back with all kinds of wonderful ideas for marketing that we utilize to this day.

Jeff: If you were to ask me five years ago where I would envision myself or what I would want to be doing, I'm doing it. It's a little bit surreal sometimes, but at the same time it's an honor.

Tom: Jeff is such a natural to lead NonStop. One thing that I don't do very well is that I don't have the desire to get involved with marketing. It's something I'm just not that interested in, but Jeff is. We are at a very critical and exciting time with NonStop X, where marketing is going to be absolutely the highest priority. He's the right guy to be able to take NonStop to another level.

Gabrielle: It really is a unique community. I think we are all lucky to be a part of it.

Jeff: Agreed.

Tom: I've worked for eight different computer companies, in different roles and titles, and out of all of them, the best group of people with the best product has always been NonStop. For me, there are four reasons why selling NonStop is so much fun.

The first is that it's a very complex product, but it's a fun product. It's a value proposition sell, not a commodity sell.

Secondly, it's a relationship sell because of the nature of the solution. It's the highest mission-critical application within our customer base. If this system doesn't work, these customers could go out of business. So that just screams high-level relationships.

Third, we have unbelievable support. The solution architects within this group are second to none. They have credibility that has been established over the years, and they are clearly team players. They believe in the team concept, and they're quick to jump in and help other people.

And the fourth reason is the Tandem culture. What differentiates us from the greater HPE is this specific Tandem culture that calls for everyone to go the extra mile. That's why I feel like NonStop is unique. It's the best place to sell and work. It speaks volumes about why we are the way we are.

Gabrielle: Jeff, what was it like to have Tom as your long-time mentor?

Jeff: It's been awesome. Everybody should have a mentor, but it's a two-way street. You can't just say, "I need a mentor." It doesn't work like that. It has to be a two-way relationship, with a person on the other side of it willing to invest the time, energy and care to really be effective in being a mentor. Tom has been not only the most influential person in my career but also one of the most influential people in my life. To have as much respect for someone in their profession as I have for Tom, to get to admire and replicate what they do, and to weave it into your own style is a cool opportunity, but that's only one part of it.

The other part is to see what kind of person he is overall and with his family, friends and the people that he meets. He's the real deal. I've just been really, really lucky to get to spend all that time with him. If you didn't know any better, you would think he's a salesman's salesman sometimes, because he is so gregarious, outgoing and such a people person, but he is absolutely genuine in who he is, and he always follows through with people. I couldn't have asked for a better person to be my mentor.

39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who has motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange, because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?", even if it was a few degrees of separation, the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress: Not just another pretty face

An integrated SQL Database Manager for HP NonStop

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.

New! Now audits 100% of all SQL/MX and SQL/MP user activity. Integrated with XYGATE Merged Audit.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release included many updates, there were still many issues, not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller. But we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV to use.

If you do install/upgrade GNV, then GNV must be installed first; upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools (AR simulation tools)
• bash
• coreutils
• gawk
• grep
• ld_tools (CC/LD/C++/CPP simulation tools)
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort of John Malmberg of cPython 3.6a0+ is an example of using the above tools for a build. It is a work-in-progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command for large searches has issues with NFS volumes and files.

The Bash, Coreutils, Gawk, Grep, Sed and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file, for the stuff that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has a #include "config_vms.h" in it at the end. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
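
The article does not reproduce config_h.com itself (it is a DCL procedure), but the idea it describes can be sketched in a few lines of Python: walk config.h.in, resolve the feature macros the script knows how to decide, leave the rest alone, and end the generated config.h with an include of the hand-maintained config_vms.h. The KNOWN_FEATURES table below is an invented placeholder, not the procedure's actual logic.

# Illustrative sketch only; the real config_h.com is a DCL procedure with far
# more package-specific knowledge than this.
KNOWN_FEATURES = {"HAVE_STRING_H": "1", "HAVE_UNISTD_H": "1"}   # made-up examples

def generate_config_h(template="config.h.in", output="config.h"):
    lines = []
    with open(template) as src:
        for line in src:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "#undef" and parts[1] in KNOWN_FEATURES:
                # Feature the script can decide: emit a concrete definition.
                lines.append(f"#define {parts[1]} {KNOWN_FEATURES[parts[1]]}\n")
            else:
                lines.append(line)   # the ~5% the script cannot decide stays as-is
    lines.append('#include "config_vms.h"\n')   # package-specific overrides come last
    with open(output, "w") as dst:
        dst.writelines(lines)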

In many ways the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. (continued on page 41)

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national Oceanography Cruises, to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine-million-dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database, security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers by the following diagram:


2 We use the term "protection" to denote a generic algorithm that transforms data from the original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. From most specific to most general, the layers are Application, Database, Object and Storage. Example traffic between layers includes formatted data items, files and directories, and disk blocks. Each flow represents transport of clear data between layers via a secure tunnel; each description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
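
To make the three properties concrete, here is a minimal, illustrative Python sketch. It is not the NIST FFX algorithm the article describes and is not cryptographically vetted; it simply shows a keyed, deterministic, reversible mapping over digit strings of unchanged length, so the equivalent, referential and reversible properties can be checked. The key value and round count are made-up placeholders.

import hmac, hashlib

KEY = b"demo-key"   # placeholder; a real deployment would use managed keys
ROUNDS = 8

def _round_value(key, i, other_half):
    # Deterministic per-round value derived from the key and the untouched half.
    digest = hmac.new(key, f"{i}:{other_half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big")

def protect(digits, key=KEY):
    # Maps a digit string to another digit string of the same length (toy Feistel-style rounds).
    n = len(digits)
    u, v = n // 2, n - n // 2
    a, b = int(digits[:u]), int(digits[u:])
    for i in range(ROUNDS):
        if i % 2 == 0:
            a = (a + _round_value(key, i, b)) % 10 ** u
        else:
            b = (b + _round_value(key, i, a)) % 10 ** v
    return f"{a:0{u}d}{b:0{v}d}"

def reveal(token, key=KEY):
    # Inverse mapping: runs the rounds backwards to recover the original digits.
    n = len(token)
    u, v = n // 2, n - n // 2
    a, b = int(token[:u]), int(token[u:])
    for i in reversed(range(ROUNDS)):
        if i % 2 == 0:
            a = (a - _round_value(key, i, b)) % 10 ** u
        else:
            b = (b - _round_value(key, i, a)) % 10 ** v
    return f"{a:0{u}d}{b:0{v}d}"

pan = "4111111111111111"            # well-known test number, not a live account
token = protect(pan)                # what the legacy database would store
assert len(token) == len(pan) and token.isdigit()   # equivalent
assert protect(pan) == token                         # referential
assert reveal(token) == pan                          # reversible

In a wrapped legacy system, the web interface would call protect() on payment data at ingress and reveal() only at the single point that actually charges a transaction; everything in between handles tokens that look, and store, exactly like card numbers.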

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI) or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
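
As a toy illustration of how the referential property enables shared analytics, the sketch below uses assumed data and a keyed hash rather than a reversible FPE transform (the analytics party never needs the real values); the point is only that two de-identified data sets still join on the protected key.

import hmac, hashlib

TOKEN_KEY = b"analytics-key"        # placeholder key held by the data owner

def de_identify(card_number):
    # Deterministic (referential) token: the same input always yields the same token.
    return hmac.new(TOKEN_KEY, card_number.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical source records; only the tokens would ever leave the organization.
purchases = {de_identify("4111111111111111"): {"channel": "web", "amount": 134.50}}
returns = {de_identify("4111111111111111"): {"reason": "defective"}}

# The third-party analytics firm can still correlate the two data sets
# without ever seeing a real card number.
for token in purchases.keys() & returns.keys():
    print(token, purchases[token], returns[token])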

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE allows sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary: Legacy systems present challenges when applying storage, object and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1 Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2 IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3 Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, UK, 25 June 2015. Web. 8 Oct. 2015.
4 Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5 Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6 US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary, Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7 Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8 Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9 Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10 Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11 Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1), February 2010. Manuscript (standards proposal) submitted to NIST.
12 Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13 Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14 "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15 Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16 Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17 McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18 Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, UK, 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business.

1. Reinvigorate RAS (reliability, availability and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999 percent – is critical to the success of the business to avoid loss of customers and revenue.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18 percent software licensing cost reduction and up to 55 percent reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput and added flexibility.

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you

51

52

Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.


39

Gabrielle: Tom, what has it been like from your perspective to be Jeff's mentor?

Tom: Jeff was easy. He's very bright and has a wonderful sales personality. It's easy to help people achieve their goals when they have those kinds of traits, and Jeff is clearly one of the best in that area.

A really fun thing for me is to see people grow in a job. I have been very blessed to have been mentoring people who have gone on to do some really wonderful things. It's just something that I enjoy doing more than anything else.

Gabrielle: Tom, was there a mentor who motivated you to be able to influence people like Jeff?

Tom: Oh yes. I think everyone looks for a mentor, and I'm no exception. One of them was a regional VP of Tandem named Terry Murphy. We met at Data General, and he's the one who convinced me to go into sales management, and later he sold me on coming to Tandem. It's a friendship that's gone on for 35 years, and we see each other very often. He's one of the smartest men I know, and he has great insight into the sales process. To this day he's one of my strongest mentors.

Gabrielle: Jeff, what are some of the ideas you have for the role and for the company moving forward?

Jeff: One thing we have done incredibly well is to sustain our relationship with all of the manufacturers and all of the industries that we touch. I can't imagine doing a much better job in servicing our customers, who are the first priority, always. But what I really want to see us do is take an aggressive approach to growth. Everybody always wants to grow, but I think we are at an inflection point here where we have a window of opportunity to do that, whether that's with existing customers in the financial services and payments space, expanding into different business units within that industry, or winning entirely new customers altogether. We have no reason to think we can't do that. So for me, I want to take an aggressive and calculated approach to going after new business, and I also want to make sure the team is having some fun doing it. That's really the message I want to start to get across to our own people, and I want to really energize the entire NonStop community around that thought too. I know our partners are all excited about our direction with hybrid architectures and the potential of NonStop-as-a-Service down the road. We should all feel really confident about the next few years and our ability to grow top-line revenue.

Gabrielle: When Tom leaves in the spring, what's the first order of business once you're flying solo and it's all yours?

Jeff: That's an interesting question, because the benefit of having him here for this transition, for these six months, is that I feel like there won't be a hard line where all of a sudden he's not here anymore. It's kind of strange because I haven't really thought too much about it. I had dinner with Tom and his wife the other night, and I told them that on June first, when we have our first staff call and he's not in the virtual room, that's going to be pretty odd. There's not necessarily a first order of business per se, as it really will be a continuation of what we would have been doing up until that point. I definitely am not waiting until June to really get those messages across that I just mentioned. It's really an empowerment, and the goals are to make Tom proud and to honor what he has done as a career. I know I will have in the back of my mind that I owe it to him to keep the momentum that he's built. It's really just going to be putting work into action.

Gabrielle: It's just kind of a bittersweet moment.

Jeff: Yeah, absolutely, and it's so well-deserved for him. His job has been everything to him, so I really feel like I am succeeding a legend. It's bittersweet because he won't be there day-to-day, but I am so happy for him. It's about not screwing things up, but it's also about leading NonStop into a new chapter.

Gabrielle: Yes, Tom is kind of a legend in the NonStop space.

Jeff: He is. Everybody knows him. Every time I have asked someone, "Do you know Tom Moylan?" – even if it was a few degrees of separation – the answer has always been "Yes." And not only yes, but "What a great guy." He's been the face of this group for a long time.

Gabrielle: Well, it sounds like an interesting opportunity, and at an interesting time.

Jeff: With what we have now, with NonStop X and our hybrid direction, it really is an amazing time to be involved with this group. It's got a lot of people energized, and it's not lost on anyone, especially me. I think this will be one of those defining times when you're sitting here five years from now going, "Wow, that was really a pivotal moment for us in our history." It's cool to feel that way, but we just need to deliver on it.

Gabrielle: We wish you the best of luck in your new position, Jeff.

Jeff: Thank you.

40

SQLXPress. Not just another pretty face.

An integrated SQL Database Manager for HP NonStop.

Single solution providing database management, visual query planner, query advisor, SQL whiteboard, performance monitoring, MXCS management, execution plan management, data import and export, data browsing and more.

With full support for both SQL/MP and SQL/MX.

New! Now audits 100% of all SQL/MX & MP user activity. Integrated with XYGATE Merged Audit.

Learn more at xypro.com/SQLXPress

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.


41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's Not VMS, GNU for OpenVMS, and surely there are others. The closest type of implementation of a similar nature is Cygwin on Microsoft Windows, which implements a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0.1 was released. While that release saw many updates, there were still many issues – not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still at version 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1.3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists, review the monthly conference call notes, or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV in order to use them.

If you do install or upgrade GNV, then GNV must be installed first; and upgrading GNV using HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools – AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools – CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current release of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

John Malmberg's porting effort for cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the things that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
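To make the flow described above concrete, here is a rough Python sketch of the kind of substitution config_h.com performs. This is only an illustration: the real tool is a DCL procedure with far more complete rules, and the file names and the KNOWN symbol table below are assumptions made for this example, not part of the actual kit.

    import re

    # Hypothetical results of feature probes; the real procedure knows many more.
    KNOWN = {"HAVE_STRING_H": "1", "HAVE_UNISTD_H": "1"}

    def generate_config_h(template_path, output_path):
        """Turn a config.h.in template into config.h, deferring anything the
        generator cannot decide to a hand-maintained config_vms.h included last."""
        out_lines = []
        with open(template_path) as template:
            for line in template:
                match = re.match(r"#\s*undef\s+(\w+)", line)
                if match and match.group(1) in KNOWN:
                    out_lines.append("#define %s %s\n" % (match.group(1), KNOWN[match.group(1)]))
                else:
                    out_lines.append(line)  # the roughly 5 percent left for config_vms.h
        out_lines.append('#include "config_vms.h"\n')  # package-specific overrides go last
        with open(output_path, "w") as output:
            output.writelines(out_lines)

    # Example: generate_config_h("config.h.in", "config.h")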

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0.2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national Oceanography cruises, to systems management, engineering management, project management, disaster recovery and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

continued from page 40: Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's Archive, Patrick Moreau's archive and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but then there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python for OpenVMS, as well as several Python-based applications including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress.

We would be happy to see your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined
Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems.1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago.2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee,3 as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014.4 This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems
In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus.5 Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publicly published by Health and Human Services (HHS).6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.1

Integrating Data Protection Into Legacy Systems: Methods And Practices
Jason Paul Kazarian

1 This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges
Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.7 Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware integration, software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development: for example, Apple Computer routinely stops updating systems after seven years.8 It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model
Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.9 In the data protection stack, each layer represents a discrete protection2 responsibility, while the boundaries between layers designate potential exploits. Traditionally, we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.10

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


2 We use the term "protection" for a generic algorithm transforming data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic includes formatted data items, files and directories, and disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A priori data protection
When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.11

As an exercise, let's consider "wrapping" a legacy system with a new web interface12 that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.3 The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
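As a minimal sketch of the three properties above, the following Python fragment builds a toy Feistel construction over digit strings. It is emphatically not the NIST FFX algorithm and is not suitable for production use; the function names protect and reveal are invented for this illustration. It only demonstrates that a keyed, deterministic, reversible, length-preserving mapping over digits is possible.

    import hmac, hashlib

    def _round(key, round_no, half, width):
        """Pseudo-random round value derived from the round number and one half."""
        digest = hmac.new(key, ("%d:%s" % (round_no, half)).encode(), hashlib.sha256).digest()
        return int.from_bytes(digest, "big") % (10 ** width)

    def protect(key, digits, rounds=10):
        """Map an even-length digit string to another digit string of the same length."""
        width = len(digits) // 2
        left, right = digits[:width], digits[width:]
        for i in range(rounds):
            mixed = (int(left) + _round(key, i, right, width)) % 10 ** width
            left, right = right, "%0*d" % (width, mixed)
        return left + right

    def reveal(key, digits, rounds=10):
        """Invert protect() by running the Feistel rounds backwards."""
        width = len(digits) // 2
        left, right = digits[:width], digits[width:]
        for i in reversed(range(rounds)):
            unmixed = (int(right) - _round(key, i, left, width)) % 10 ** width
            left, right = "%0*d" % (width, unmixed), left
        return left + right

    key = b"demo key, not for production"
    token = protect(key, "4111111111111111")            # equivalent: still a 16-digit string
    assert protect(key, "4111111111111111") == token    # referential: same input, same output
    assert reveal(key, token) == "4111111111111111"     # reversible

Because the mapping is a deterministic permutation for a fixed key and length, a protected column can still serve as a join key, which is exactly the referential property described above.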

Benefits of sharing protected data
One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,13 and their customers, costing consumers billions of dollars annually.14 As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
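To illustrate how the referential property enables this kind of sharing, here is a brief Python sketch. The field names are invented for the example, and a simple keyed pseudonym stands in for the reversible FPE transform discussed above; the point is only that a deterministic, keyed substitution keeps joins intact while hiding the real identifier.

    import hmac, hashlib

    def pseudonym(key, value):
        """Deterministic keyed pseudonym: the same input always yields the same output."""
        return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

    def anonymize(rows, key, fields):
        """Return copies of rows with the named identifier fields pseudonymized."""
        return [{k: pseudonym(key, str(v)) if k in fields else v for k, v in row.items()}
                for row in rows]

    key = b"analytics sharing key"      # held by the data owner, never by the analyst
    visits = [{"patient_id": "P-1001", "clinic": "A", "a1c": 7.9}]
    labs = [{"patient_id": "P-1001", "ldl": 162}]

    shared_visits = anonymize(visits, key, ("patient_id",))
    shared_labs = anonymize(labs, key, ("patient_id",))

    # The third party can still join the two data sets on patient_id
    # without ever seeing the real identifier.
    assert shared_visits[0]["patient_id"] == shared_labs[0]["patient_id"]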

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3 American Express uses 15 digits, while Discover, MasterCard and Visa use 16 instead. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics15 for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.16 De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached.

Summary
Legacy systems present challenges when applying storage, object and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.




bull Hardware limitations legacy systems may have adequate compute communication and storage resources for accomplishing originally intended tasks but not sufficient reserve to accommodate increased computational and storage responsibilities For example decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations

These challenges intensify if the legacy system in question is unsupported One key obstacle is vendors no longer provide resources for further development For example Apple Computer routinely stops updating systems after seven years8 It may become cost-prohibitive to modify a system if the manufacturer does provide any assistance Yet sensitive data stored on legacy systems must be protected as the datarsquos lifetime is usually much longer than any manufacturerrsquos support period

Data protection model Modeling data protection methods as layers in a stack similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven layer network model is a familiar concept9 In the data protection stack each layer represents a discrete protection2 responsibility while the boundaries between layers designate potential exploits Traditionally we define the following four discrete protection layers sorted in order of most general to most specific storage object database and data10

At each layer itrsquos important to apply some form of protection Users obtain permission from multiple sources for example both the local operating system and a remote authorization server to revert a protected item back to its original form We can briefly describe these four layers by the following diagram

Integrating Data Protection Into Legacy SystemsMethods And Practices Jason Paul Kazarian

2 We use the term ldquoprotectionrdquo as a generic algorithm transform data from the original or plain-text form to an encoded or cipher-text form We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary

Layer

Application

Database

Object

Storage

Disk blocks

Files directories

Formatted data items

Flow represents transport of clear databetween layers via a secure tunnel Description represents example traffic

47

bull Storage protects data on a device at the block level before the application of a file system Each block is transformed using a reversible protection algorithm When the storage is in use an intermediary device driver reverts these blocks to their original state before passing them to the operating system

bull Object protects items such as files and folders within a file system Objects are returned to their original form before being opened by for example an image viewer or word processor

bull Database protects sensitive columns within a table Users with general schema access rights may browse columns but only in their encrypted or tokenized form Designated users with role-based access may re-identify the data items to browse the original sensitive items

bull Application protects sensitive data items prior to storage in a container for example a database or application server If an appropriate algorithm is employed protected data items will be equivalent to unprotected data items meaning having the same attributes format and size (but not the same value)

Once protection is bypassed at a particular layer attackers can use the same exploits as if the layer did not exist at all For example after a device driver mounts protected storage and translates blocks back to their original state operating system exploits are just as successful as if there was no storage protection As another example when an authorized user loads a protected document object that user may copy and paste the data to an unprotected storage location Since HHS statistics show 20 of breaches occur from unauthorized disclosure relying solely on storage or object protection is a serious security risk

A-priori data protection When adding data protection to a legacy system we will obtain better integration at lower cost by minimizing legacy system changes One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection11

As an exercise letrsquos consider ldquowrappingrdquo a legacy system with a new web interface12 that collects payment data from customers As the system collects more and more payment records the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data

Adding data protection at the storage object and database layers may be fiscally or technically (or both) challenging But what if the payment data itself was protected at ingress into the legacy system

Now letrsquos consider applying an FPE algorithm to a credit card number The input to this algorithm is a digit string typically

15 or 16 digits3 The output of this algorithm is another digit string that is

bull Equivalent besides the digit values all other characteristics of the output such as the character set and length are identical to the input

bull Referential an input credit card number always produces exactly the same output This output never collides with another credit card number Thus if a column of credit card numbers is protected via FPE the primary and foreign key relations among linked tables remain the same

bull Reversible the original input credit card number can be obtained using an inverse FPE algorithm

Now as we collect more and more customer records we no longer increase the ldquoblack marketrdquo opportunity If a hacker were to successfully breach our legacy credit card database that hacker would obtain row upon row of protected credit card numbers none of which could be used by the hacker to conduct a payment transaction Instead the payment interface having exclusive access to the inverse FPE algorithm would be the only node able to charge a transaction

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress Even if the data protection stack is breached below the application layer protected data remains anonymized and safe

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach Such breaches harm both businesses costing up to $240 per breached healthcare record13 and their customers costing consumers billions of dollars annually14 As the volume of data breached increases rapidly not just in financial markets but also in health care organizations are under pressure to add data protection to legacy systems

A less obvious benefit of application level data protection is the creation of new benefits from data sharing data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII) personal healthcare information (PHI) or payment card industry (PCI) data This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data

Let us consider two examples of data sharing benefits one from retail operations and one from healthcare Both examples are case studies showing how anonymizing data via an algorithm having equivalent referential and reversible properties enables performing analytics on large data sets outside of an organizationrsquos direct control

3 American Express uses a 15 digits while Discover Master Card and Visa use 16 instead Some store issued credit cards for example the Target Red Card use fewer digits but these are padded with leading zeroes to a full 16 digits

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015

49

ldquoThe backbone of the enterpriserdquo ndash itrsquos pretty common to hear SAP or Oracle business processing applications described that way and rightly so These are true mission-critical systems including enterprise resource planning (ERP) customer relationship management (CRM) supply chain management (SCM) and more When theyrsquore not performing well it gets noticed customersrsquo orders are delayed staffers canrsquot get their work done on time execs have trouble accessing the data they need for optimal decision-making It can easily spiral into damaging financial outcomes

At many organizations business processing application performance is looking creaky ndash especially around peak utilization times such as open enrollment and the financial close ndash as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

T he Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders Established in 2010 Connect FLIT is a separateUS 501 (c)(3) corporation and all donations go directly to scholarship awards

Applications are accepted from around the world and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors including GPA standardized test scores letters of recommendation and a compelling essay

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering a t I o w a S t a te U n i ve rs i t y i n t h e Fa l l o f 2 0 1 6 I n addition to being a part of the honor roll at her high schoo l her in terest in computer sc ience c lasses has evolved into a passion for programming She learned the value of leadership when she was a participant in the Des Moines Partnershiprsquos Youth Leadership Initiative and continued mentoring for the program She combined her love of leadership and computer science together by becoming the president of Hyperstream the computer science club at her high school Ann embraces the spir it of service and has logged over 200 hours of community service One of Annrsquos favorite ac t i v i t i es in h igh schoo l was be ing a par t o f the archery c lub and is look ing to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian Virginia While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial sponsored by RichTech Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America He has obtained his Cisco Certified Network Associate CompTIA A+ Palo Alto Accredited Configuration Engineer and many other certifications Erwin has 47 GPA and plans to attend Virginia Commonwealth University in the fall of 2016

Erwin Karincic

No of course you wouldnrsquot But thatrsquos effectively what many companies do when they rely on activepassive or tape-based business continuity solutions Many companies never complete a practice failover exercise because these solutions are difficult to test They later find out the hard way that their recovery plan doesnrsquot work when they really need it

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of activepassive or tape-based solutions You wouldnrsquot jump out of an airplane without a working parachute so donrsquot rely on inadequate recovery solutions to maintain critical IT services when the time comes

copy2015 Gravic Inc All product names mentioned are trademarks of their respective owners Specifications subject to change without notice

Find out how HPE Shadowbase can help you be ready for anythingVisit wwwshadowbasesoftwarecom and wwwhpcomgononstopcontinuity

Business Partner

With HPE Shadowbase software yoursquoll know your parachute will open ndash every time

You wouldnrsquot jump out of an airplane unless you knew your parachute

worked ndash would you

  1. Facebook 2
  2. Twitter 2
  3. Linked In 2
  4. C3
  5. Facebook 3
  6. Twitter 3
  7. Linked In 3
  8. C4
  9. Stacie Facebook
  10. Button 4
  11. STacie Linked In
  12. Button 6
Page 44: Connect Converge Spring 2016

41

The Open Source on OpenVMS Community has been working over the last several months to improve the quality as well as the quantity of open source facilities available on OpenVMS. Efforts have focused on improving the GNV environment. This has led to more effort in porting newer versions of open source software packages already ported to OpenVMS, as well as additional packages. There has also been effort to expand the number of platforms supported by the new GNV packages being published.

For those of you who have been under a rock for the last decade or more, GNV is the acronym used for the Open Source Porting Environment on OpenVMS. There are various expansions of the acronym: GNU's NOT VMS, GNU for OpenVMS, and surely there are others. The closest implementation of a similar nature is Cygwin on Microsoft Windows, which provides a similar GNU-like environment on that platform.

For years the OpenVMS implementation has been sort of a poor second cousin to much of the development going on for the rest of the software on the platform. The most recent "official" release was in November of 2011, when version 3.0-1 was released. While that release contained many updates, there were still many issues - not the least of which was that the version of the bash script handler (a focal point of much of the GNV environment) was still 1.14.8, which was released somewhere around 1997. This was the same bash version that had been in GNV version 2.1-3 and earlier.

In 2012 a Community effort was started to improve the environment. The number of people active at any one time varies, but there are well over 100 interested parties who are either on mailing lists or who review the monthly conference call notes or listen to the con-call recordings. The number of parties who get very active is smaller, but we know there are some very interested organizations using GNV, and as it improves we expect this to continue to grow.

New GNV component update kits are now available. These kits do not require installing GNV in order to use them.

If you do install or upgrade GNV, then GNV must be installed first; note that upgrading GNV using the HP GNV kits renames the [vms$common.gnv] directory, which causes all sorts of complications.

For the first time there are now enough new GNV components so that, by themselves, you can run most unmodified configure and makefiles on Alpha/OpenVMS 8.3+ and IA64/OpenVMS 8.4+:

• ar_tools - AR simulation tools
• bash
• coreutils
• gawk
• grep
• ld_tools - CC/LD/C++/CPP simulation tools
• make
• sed

What in the World of Open Source

Bill Pedersen

42

Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of January 2016, up to date with the current releases of the tools from their main development organizations.

The ld/cc/c++/cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information as well as the help options of the utilities

John Malmberg's port of cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the parts that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most of the ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
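
The real generator is a DCL command procedure, and its exact rules are not reproduced here. As a rough illustration only, the short Python sketch below shows the kind of transformation such a tool performs: turning the #undef lines of a config.h.in template into #define lines for features assumed to exist on the target, then appending the hand-maintained config_vms.h include at the end. The feature list is a hypothetical placeholder, not the set config_h.com actually probes.

    # Illustrative sketch only -- not the real config_h.com logic.
    import re
    import sys

    KNOWN_FEATURES = {"HAVE_UNISTD_H", "HAVE_STRING_H", "STDC_HEADERS"}  # hypothetical

    def generate_config(template_path, output_path):
        """Turn a config.h.in template into a config.h, OpenVMS style."""
        undef = re.compile(r"^#\s*undef\s+(\w+)")
        with open(template_path) as src, open(output_path, "w") as dst:
            for line in src:
                m = undef.match(line)
                if m and m.group(1) in KNOWN_FEATURES:
                    dst.write("#define %s 1\n" % m.group(1))
                else:
                    dst.write(line)
            # Whatever the generator cannot work out is supplied by hand here.
            dst.write('#include "config_vms.h"\n')

    if __name__ == "__main__":
        generate_config(sys.argv[1], sys.argv[2])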

In many ways the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts, including Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new porting efforts have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and other packages which have been held back by the lack of a complete infrastructure that reliably supports the build environments these packages use.

There are also tools outside the GNV utility set that are getting updates and being kept current on a regular basis. These include a new subprocess module for Python as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 3.0-2 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel physicists and multi-national oceanography cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.

43

(continued from page 40) Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis, but along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII-to-PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years; he ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to have your help on the projects as well.

44

45

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best and completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper explores the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which they operate in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee [3]; such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by analyzing the security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained roughly constant (bounded by O(1)), the number of records exposed has increased at O(2^n), as illustrated by the following diagram (see note 1).

Integrating Data Protection Into Legacy Systems: Methods and Practices - Jason Paul Kazarian

Note 1: This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jasonkazarian@hpe.com.

46

Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were shared resources, for example emails, electronic medical records, or network servers. Thus, legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications [7]. Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years [8]. It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection responsibility (see note 2), while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and data [10].

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


Note 2: We use the term "protection" for a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. From most specific to most general, the layers are Application, Database, Object, and Storage, with example traffic of formatted data items, files and directories, and disk blocks. Flows represent transport of clear data between layers via a secure tunnel; descriptions represent example traffic.]

47

• Storage: protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object: protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database: protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application: protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself were protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits (see note 3). The output of this algorithm is another digit string with the following properties (illustrated in the sketch after this list):

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and this output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
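
To make the three properties concrete, here is a deliberately simple Python sketch. It is not the NIST FFX construction and offers no real security; it only demonstrates what equivalent, referential, and reversible mean for a digit string. A production system would call a vetted FF1/FFX implementation instead.

    # Toy illustration of equivalent / referential / reversible -- NOT NIST FFX,
    # and not suitable for protecting real data.
    import hashlib

    def _digit_stream(key: bytes, length: int):
        """Derive a repeatable stream of digits 0-9 from the key."""
        block = hashlib.sha256(key).digest()
        while len(block) < length:
            block += hashlib.sha256(block).digest()
        return [b % 10 for b in block[:length]]

    def protect(pan: str, key: bytes) -> str:
        stream = _digit_stream(key, len(pan))
        return "".join(str((int(d) + k) % 10) for d, k in zip(pan, stream))

    def reveal(token: str, key: bytes) -> str:
        stream = _digit_stream(key, len(token))
        return "".join(str((int(d) - k) % 10) for d, k in zip(token, stream))

    key = b"demo-key"
    token = protect("4111111111111111", key)
    assert token.isdigit() and len(token) == 16          # equivalent: same format
    assert token == protect("4111111111111111", key)     # referential: deterministic
    assert reveal(token, key) == "4111111111111111"      # reversible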

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
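
A minimal sketch of that ingress/egress placement follows, assuming the wrapping web interface is the only component holding the key. protect() and reveal() stand in for a vetted FPE library (the toy functions above serve for illustration), and legacy_store_payment and charge_card are hypothetical stand-ins for whatever the legacy interfaces actually are; the legacy system itself is unchanged and only ever sees protected values.

    # Hypothetical ingress/egress wrapper around an unchanged legacy system.
    def handle_web_payment(form, key, legacy_store_payment):
        # Ingress: anonymize the card number before it reaches legacy storage.
        record = dict(form)
        record["pan"] = protect(form["pan"], key)
        legacy_store_payment(record)   # legacy code sees a valid-looking digit string

    def settle_transaction(record, key, charge_card):
        # Egress: only the payment step may reverse the protection.
        charge_card(reveal(record["pan"], key), record["amount"])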

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

Note 3: American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.

48

For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE allows sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both the functions originally intended and new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers. 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA, Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct. 2015.

49

"The backbone of the enterprise" - it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky - especially around peak utilization times such as open enrollment and the financial close - as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability - 99.999 percent - is critical to the success of the business to avoid loss of customers and revenue.

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.

2. Consolidate workloads and simplify a complex business processing landscape. Over time the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to an 18% software licensing cost reduction and up to a 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. dianacortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science together by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.



Ar_tools and ld_tools are wrappers to the native OpenVMS utilities. The make is an older fork of GNU Make. The rest of the utilities are, as of Jan 2016, up to date with the current release of the tools from their main development organizations.

The ld, cc, c++, and cpp wrappers automatically look for additional, optional OpenVMS-specific source files and scripts to run to supplement their operation, which means you just need to set some environment variables and add the OpenVMS-specific files before doing the configure and make.

Be sure to read the release notes for helpful information, as well as the help options of the utilities.

The porting effort by John Malmberg on cPython 3.6a0+ is an example of using the above tools for a build. It is a work in progress that currently needs a working port of libffi for the build to continue, but it is creating a functional cPython 3.6a0+. Currently it is what John is using to sanity-test new builds of the above components.

Additional OpenVMS scripts are called by the ld program to scan the source for universal symbols and look them up in the CXX$DEMANGLER_DB.

The build of cPython 3.6a0+ creates a shared Python library and then builds almost 40 dynamic plugins, each a shared image. These scripts do not use the search command, mainly because John uses NFS volumes, and the OpenVMS search command has issues with NFS volumes and files when doing large searches.

The Bash, Coreutils, Gawk, Grep, Sed, and Curl ports use a config_h.com procedure that reads a config.h.in file and can generate about 95 percent of it correctly. John uses a product-specific script to generate a config_vms.h file for the parts that config_h.com does not know how to get correct for a specific package, before running config_h.com.

The config_h.com procedure generates a config.h file that has an #include "config_vms.h" at the end of it. The config_h.com scripts have been tested as far back as VAX/VMS 7.3 and can find most ways that a config.h.in file gets named on unpacking on an ODS-2 volume, in addition to handling the ODS-5 format name.
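
The real config_h.com is a DCL command procedure, but the general idea is easy to sketch. The Python snippet below is illustrative only, assuming a hypothetical table of feature answers (KNOWN) and the usual autoconf file names: it turns each "#undef FEATURE" it can answer into a "#define" and defers everything else to the package-specific config_vms.h appended at the end.

```python
# Illustrative sketch only: the real config_h.com is a DCL procedure, not Python.
# It shows the idea: answer the "#undef FEATURE" lines we know about, and defer
# the remaining ~5% to a package-specific config_vms.h included at the end.

KNOWN = {                      # hypothetical answers the generator can work out itself
    "HAVE_UNISTD_H": "1",
    "HAVE_STRING_H": "1",
    "PACKAGE_NAME": '"coreutils"',
}

def generate_config_h(template="config.h.in", output="config.h"):
    with open(template) as src, open(output, "w") as dst:
        for line in src:
            token = line.split()
            if len(token) == 2 and token[0] == "#undef" and token[1] in KNOWN:
                dst.write("#define %s %s\n" % (token[1], KNOWN[token[1]]))
            else:
                dst.write(line)        # left for the package-specific overrides
        dst.write('\n#include "config_vms.h"\n')   # corrections applied last

if __name__ == "__main__":
    generate_config_h()
```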

In many ways, the ability to easily port Open Source software to OpenVMS, or to maintain a code base consistent between OpenVMS and other platforms, is crucial to the future of OpenVMS. Important vendors use GNV for their efforts; these include Oracle, VMS Software Inc., eCube Systems, and others.

Some of the new efforts in porting have included LLVM (Low Level Virtual Machine), which is forming the basis of new compiler back-ends for work being done by VMS Software Inc. Updated ports are in progress for Samba, Kerberos, and others, which have been held back by the lack of a complete infrastructure that reliably supports the build environment used by these and other packages.

There are tools that are not in the GNV utility set that are getting updates and being kept current on a regular basis as well. These include a new subprocess module for Python, as well as new releases of both cURL and zlib.

These can be found on the SourceForge VMS-Ports project site under "Files".

All of the most recent IA64 versions of the GNV PCSI kits mentioned above, as well as the cURL and zlib kits, will install on both HP OpenVMS V8.4 and VSI OpenVMS V8.4-1H1 and above. There is also a PCSI kit for GNV 302 which is specific to VSI OpenVMS. These kits are, as previously mentioned, hosted on SourceForge on either the GNV project or the VMS-Ports project. continued on page 41

Mr. Pedersen has over 40 years of experience in the DEC/Compaq/HP computing environment. His experience has ranged from supporting scientific experimentation using computers, including Nobel Physicists and multi-national Oceanography Cruises, to systems management, engineering management, project management, disaster recovery, and open source development. He has worked for various educational and research organizations, Digital Equipment Corporation, several start-ups, and Stromasys Inc., and had his own OpenVMS-centered consultancy for over 30 years. He holds a Bachelor of Science in Physical and Chemical Oceanography from the University of Washington. He is also the Director of the South Carolina Robotics Education Foundation, a nonprofit, project-oriented STEM education outreach organization and the FIRST Tech Challenge affiliate partner for South Carolina.


continued from page 40 Some Community members have their own sites where they post their work. These include Jouk Jansen, Ruslan Laishev, Jean-François Piéronne, Craig Berry, Mark Berryman, and others.

Jouk Jansen's site: Much of the work Jouk is doing is targeted at scientific analysis. But along the way he has also been responsible for ports of several general-purpose utilities, including the clamAV anti-virus software, A2PS (an ASCII to PostScript converter), an older version of Bison, and many others. A quick count suggests that Jouk's repository has over 300 packages. Links from Jouk's site get you to Hunter Goatley's archive, Patrick Moreau's archive, and HP's archive.

Ruslan's site: Recently Ruslan announced an updated version of POP3. Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well.

Hunter's archive: Hunter's archive contains well over 300 packages. These are both open source packages and freeware/DECUSware packages. Some are specific to OpenVMS, while others are ports to OpenVMS.

The HPE Open Source and Freeware archives: There are well over 400 packages available here. Yes, there is some overlap with other archives, but there are also unique offerings such as T4 or BLISS.

Jean-François is active in the Python community and distributes Python on OpenVMS, as well as several Python-based applications, including the Mercurial SCM system. Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community. Mark has been active in Open Source for many years. He ported MySQL, started the port of PostgreSQL, and has also ported MariaDB.

As more and more of the GNU environment gets updated and tested on OpenVMS, newer and more critical Open Source application packages are being ported to OpenVMS. The foundation is getting stronger every day. We still have many tasks ahead of us, but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute.

Keep watching this space for more progress

We would be happy to see your help on the projects as well


Integrating Data Protection Into Legacy Systems: Methods and Practices
Jason Paul Kazarian

Legacy systems remain critical to the continued operation of many global enterprises. Recent cyber-attacks suggest legacy systems remain under-protected, especially considering the asset values at stake. Development of risk mitigations as point solutions has been minimally successful at best, completely ineffective at worst.

The NIST FFX data protection standard provides publicly auditable data protection algorithms that reflect an application's underlying data structure and storage semantics. Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breach's consequences.

This paper will explore the application of data protection in a typical legacy system architecture. Best practices are identified and presented.

Legacy systems defined: Traditionally, legacy systems are complex information systems, initially developed well in the past, that remain critical to the business in which these systems operate, in spite of being more difficult or expensive to maintain than modern systems [1]. Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time.

We can classify legacy systems as supported or unsupported. We consider a legacy system as supported when the operating system publisher provides security patches on a regular, open-market basis. For example, IBM z/OS is a supported legacy system: IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago [2].

We consider a legacy system as unsupported when the publisher no longer provides regular security updates. For example, Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems, even though the US Navy obtains security patches for a nine million dollar annual fee [3], as such patches are not offered to commercial XP or Server 2003 owners.

Unsupported legacy systems present additional security risks: as vulnerabilities are discovered and documented in more modern systems, attackers use these unpatched vulnerabilities to exploit an unsupported system. Continuing this example, Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 2014 [4]. This presents dozens of opportunities for hackers to exploit organizations still running XP.

Security threats against legacy systems: In June 2010, Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and published the inner workings of the Stuxnet computer virus [5]. Since then, organized and state-sponsored hackers have profited from this cookbook for stealing data. We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis of security breach statistics publicly published by Health and Human Services (HHS) [6].

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant, bounded by O(1), the number of records exposed has increased at O(2^n), as illustrated by the following diagram.¹


¹ This analysis excludes the Anthem Inc. breach reported on March 13, 2015, as it alone is two times larger than the sum of all other breaches reported to date in 2015.

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems. He has thirty years of industry experience in the aerospace, database security, and telecommunications domains. He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University, Dominguez Hills. He may be reached at jason.kazarian@hpe.com.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2:3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records, or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges: Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications [7]. Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication, and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development; for example, Apple Computer routinely stops updating systems after seven years [8]. It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model: Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnection (OSI) seven-layer network model, is a familiar concept [9]. In the data protection stack, each layer represents a discrete protection² responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database, and application [10].

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


² We use the term "protection" to mean a generic algorithm that transforms data from the original, or plain-text, form to an encoded, or cipher-text, form. We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. From most specific to most general, the layers are Application, Database, Object, and Storage. Flows between layers represent transport of clear data via a secure tunnel; example traffic includes formatted data items (between Application and Database), files and directories (between Database and Object), and disk blocks (between Object and Storage).]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format, and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there was no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection: When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection [11].

As an exercise, let's consider "wrapping" a legacy system with a new web interface [12] that collects payment data from customers. As the system collects more and more payment records, the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object, and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.³ The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output. This output never collides with another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.
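
The sketch below is a toy illustration of these three properties, not the NIST FFX algorithm and not cryptographically sound; the key, tweak, and helper names are assumptions made for the example. It simply shows a deterministic, digit-preserving, invertible transform so the equivalent, referential, and reversible checks can be seen in code.

```python
# Toy illustration only -- NOT the NIST FFX algorithm and not cryptographically
# sound.  It only demonstrates the three properties the article relies on:
# equivalent (same length and character set), referential (deterministic),
# and reversible (an inverse function recovers the original digits).
import hmac, hashlib

KEY = b"demo-key"   # hypothetical key; a real deployment uses managed keys

def _keystream(length):
    # Deterministic digit stream derived from the key; identical every call.
    digest = hmac.new(KEY, b"card-number", hashlib.sha256).hexdigest()
    while len(digest) < length:
        digest += hmac.new(KEY, digest.encode(), hashlib.sha256).hexdigest()
    return [int(c, 16) % 10 for c in digest[:length]]

def protect(pan):
    ks = _keystream(len(pan))
    return "".join(str((int(d) + k) % 10) for d, k in zip(pan, ks))

def reveal(token):
    ks = _keystream(len(token))
    return "".join(str((int(d) - k) % 10) for d, k in zip(token, ks))

pan = "4111111111111111"
token = protect(pan)
assert len(token) == len(pan) and token.isdigit()   # equivalent
assert token == protect(pan)                        # referential
assert reveal(token) == pan                         # reversible
```

A production deployment would use a vetted FFX (FF1/FF3) implementation and managed keys rather than anything like this shift-based stand-in.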

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used by the hacker to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
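
As a hedged sketch of what that boundary might look like, the snippet below reuses the toy protect() and reveal() helpers from the previous example; the form fields, the in-memory LEGACY_DB list, and the function names are hypothetical stand-ins for the wrapped legacy system.

```python
# Sketch of a-priori protection at the boundary; assumes protect()/reveal()
# from the previous sketch.  The legacy store only ever sees protected values.
LEGACY_DB = []   # stand-in for the legacy data store

def handle_payment_form(form):
    # Ingress: the PAN is protected before it ever reaches legacy storage.
    record = dict(form, card_number=protect(form["card_number"]))
    LEGACY_DB.append(record)   # legacy schema unchanged: same length, all digits
    return record

def charge_transaction(record):
    # Egress: only the payment interface holds the inverse transform.
    pan = reveal(record["card_number"])
    return "charging %s to card ending in %s" % (record["amount"], pan[-4:])

handle_payment_form({"card_number": "4111111111111111", "amount": "19.99"})
print(charge_transaction(LEGACY_DB[0]))
```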

Benefits of sharing protected data: One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record [13], and their customers, costing consumers billions of dollars annually [14]. As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new benefits from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI), or payment card industry (PCI) data. This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data.
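
A small sketch of that idea, again reusing the toy protect() helper from above and inventing the patient identifiers and field names: because the transform is referential, two separately anonymized data sets still join on the protected key, so a third party can correlate them without ever seeing the real identifiers.

```python
# Sketch: referential protection preserves joins across anonymized data sets.
# Assumes protect() from the earlier sketch; identifiers below are hypothetical.
visits = [{"patient": protect("1000000001"), "clinic": "A", "visits": 7}]
labs   = [{"patient": protect("1000000001"), "a1c": 8.9}]

by_patient = {row["patient"]: dict(row) for row in visits}
for row in labs:
    by_patient.setdefault(row["patient"], {}).update(row)   # join still works

print(list(by_patient.values()))   # analytics sees only protected identifiers
```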

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having the equivalent, referential, and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

³ American Express uses 15 digits, while Discover, MasterCard, and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics [15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition [16]. De-identifying this data with FPE allows sharing patient data across a regional hospital system or even nationally. Without such protection, care providers risk fines from the government [17] and chargebacks from insurance companies [18] if live data is breached.

Summary: Legacy systems present challenges when applying storage, object, and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential, and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I., & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998, Proceedings of the Second Euromicro Conference on (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US. 11 Apr 2012. Web. 19 Jan 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK. 25 June 2015. Web. 8 Oct 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet. 8 Sept 2015. Web. 8 Oct 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronic Engineers. 26 Feb 2013. Web. 02 Nov 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C., & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US. 09 Oct 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach. 14 Aug 2009. Web. 08 Oct 2015.
11. Bellare, M., Rogaway, P., & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA. 30 Mar 2012. Web. 02 Nov 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News. 5 June 2013. Web. 09 Oct 2015.
15. Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA. 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media. 9 May 2014. Web. 15 Oct 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK. 28 May 2015. Web. 15 Oct 2015.



For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

T he Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders Established in 2010 Connect FLIT is a separateUS 501 (c)(3) corporation and all donations go directly to scholarship awards

Applications are accepted from around the world and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors including GPA standardized test scores letters of recommendation and a compelling essay

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering a t I o w a S t a te U n i ve rs i t y i n t h e Fa l l o f 2 0 1 6 I n addition to being a part of the honor roll at her high schoo l her in terest in computer sc ience c lasses has evolved into a passion for programming She learned the value of leadership when she was a participant in the Des Moines Partnershiprsquos Youth Leadership Initiative and continued mentoring for the program She combined her love of leadership and computer science together by becoming the president of Hyperstream the computer science club at her high school Ann embraces the spir it of service and has logged over 200 hours of community service One of Annrsquos favorite ac t i v i t i es in h igh schoo l was be ing a par t o f the archery c lub and is look ing to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian Virginia While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial sponsored by RichTech Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America He has obtained his Cisco Certified Network Associate CompTIA A+ Palo Alto Accredited Configuration Engineer and many other certifications Erwin has 47 GPA and plans to attend Virginia Commonwealth University in the fall of 2016

Erwin Karincic

No of course you wouldnrsquot But thatrsquos effectively what many companies do when they rely on activepassive or tape-based business continuity solutions Many companies never complete a practice failover exercise because these solutions are difficult to test They later find out the hard way that their recovery plan doesnrsquot work when they really need it

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of activepassive or tape-based solutions You wouldnrsquot jump out of an airplane without a working parachute so donrsquot rely on inadequate recovery solutions to maintain critical IT services when the time comes

copy2015 Gravic Inc All product names mentioned are trademarks of their respective owners Specifications subject to change without notice

Find out how HPE Shadowbase can help you be ready for anythingVisit wwwshadowbasesoftwarecom and wwwhpcomgononstopcontinuity

Business Partner

With HPE Shadowbase software yoursquoll know your parachute will open ndash every time

You wouldnrsquot jump out of an airplane unless you knew your parachute

worked ndash would you

  1. Facebook 2
  2. Twitter 2
  3. Linked In 2
  4. C3
  5. Facebook 3
  6. Twitter 3
  7. Linked In 3
  8. C4
  9. Stacie Facebook
  10. Button 4
  11. STacie Linked In
  12. Button 6
Page 46: Connect Converge Spring 2016

43

continued from page 40 Some Community members have their own sites where they post their work These include Jouk Jansen Ruslan Laishev Jean-Franccedilois Pieacuteronne Craig Berry Mark Berryman and others

Jouk Jansenrsquos site Much of the work Jouk is doing is targeted at scientific analysis But along the way he has also been responsible for ports of several general purpose utilities including clamAV anti-virus software A2PS and ASCII to Postscript converter an older version of Bison and many others A quick count suggests that Joukrsquos repository has over 300 packages Links from Joukrsquos site get you to Hunter Goatleyrsquos Archive Patrick Moreaursquos archive and HPrsquos archive

Ruslanrsquos siteRecently Ruslan announced an updated version of POP3 Ruslan has also recently added his OpenVMS POP3 server kit to the VMS-Ports SourceForge project as well

Hunterrsquos archiveHunterrsquos archive contains well over 300 packages These are both open source packages and freewareDECUSware packages Some are specific to OpenVMS while others are ports to OpenVMS

The HPE Open Source and Freeware archivesThere are well over 400 packages available here Yes there is some overlap with other archives but then there are also unique offerings such as T4 or BLISS

Jean-Franccedilois is active in the Python community and distributes Python of OpenVMS as well as several Python based applications including the Mercurial SCM system Craig is a longtime maintainer of Perl on OpenVMS and an active member of the Open Source on OpenVMS Community Mark has been active in Open Source for many years He ported MySQL started the port of PostgreSQL and has also ported MariaDB

As more and more of the GNU environment gets updated and tested on OpenVMS the effort to port newer and more critical Open Source application packages are being ported to OpenVMS The foundation is getting stronger every day We still have many tasks ahead of us but we are moving forward with all the effort that the Open Source on OpenVMS Community members contribute

Keep watching this space for more progress

We would be happy to see your help on the projects as well

44

45

Legacy systems remain critical to the continued operation of many global enterprises Recent cyber-attacks suggest legacy systems remain under protected especially considering

the asset values at stake Development of risk mitigations as point solutions has been minimally successful at best completely ineffective at worst

The NIST FFX data protection standard provides publically auditable data protection algorithms that reflect an applicationrsquos underlying data structure and storage semantics Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breachrsquos consequences

This paper will explore the application of data protection in a typical legacy system architecture Best practices are identified and presented

Legacy systems defined Traditionally legacy systems are complex information systems initially developed well in the past that remain critical to the business in which these systems operate in spite of being more difficult or expensive to maintain than modern systems1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time

We can classify legacy systems as supported to unsupported We consider a legacy system as supported when operating system publisher provides security patches on a regular open-market basis For example IBM zOS is a supported legacy system IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates For example Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems even though the US Navy obtains security patches for a nine million dollar annual fee3 as such patches are not offered to commercial XP or Server 2003 owners

Unsupported legacy systems present additional security risks as vulnerabilities are discovered and documented in more modern systems attackers use these unpatched vulnerabilities

to exploit an unsupported system Continuing this example Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 20144 This presents dozens of opportunities for hackers to exploit organizations still running XP

Security threats against legacy systems In June 2010 Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and publishing the inner workings of the Stuxnet computer virus5 Since then organized and state-sponsored hackers have profited from this cookbook for stealing data We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publically published by Health and Human Services (HHS) 6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant bounded by O(1) the number of records exposed has increased at O(2n) as illustrated by the following diagram1

Integrating Data Protection Into Legacy SystemsMethods And Practices Jason Paul Kazarian

1This analysis excludes the Anthem Inc breach reported on March 13 2015 as it alone is two times larger than the sum of all other breaches reported to date in 2015

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems He has thirty years of industry experience in the aerospace database security and telecommunications

domains He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University Dominguez Hills He may be reached at

jasonkazarianhpecom

46

Analysis of the data breach types shows that 31 are caused by either an outside attack or inside abuse split approximately 23 between these two types Further 24 of softcopy breach sources were from shared resources for example from emails electronic medical records or network servers Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches

Legacy system challenges Applying data security to legacy systems presents a series of interesting challenges Without developing a specific taxonomy we can categorize these challenges in no particular order as follows

bull System complexity legacy systems evolve over time and slowly adapt to handle increasingly complex business operations The more complex a system the more difficulty protecting that system from new security threats

bull Lack of knowledge the original designers and implementers of a legacy system may no longer be available to perform modifications7 Also critical system elements developed in-house may be undocumented meaning current employees may not have the knowledge necessary to perform modifications In other cases software source code may have not survived a storage device failure requiring assembly level patching to modify a critical system function

bull Legal limitations legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system For example a payment system may be considered evidence in a lawsuit preventing modification until the suit is settled

bull Subsystem incompatibility legacy system components may not be compatible with modern day hardware integration software or other practices and technologies Organizations may be responsible for providing their own development and maintenance environments without vendor support

bull Hardware limitations legacy systems may have adequate compute communication and storage resources for accomplishing originally intended tasks but not sufficient reserve to accommodate increased computational and storage responsibilities For example decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations

These challenges intensify if the legacy system in question is unsupported One key obstacle is vendors no longer provide resources for further development For example Apple Computer routinely stops updating systems after seven years8 It may become cost-prohibitive to modify a system if the manufacturer does provide any assistance Yet sensitive data stored on legacy systems must be protected as the datarsquos lifetime is usually much longer than any manufacturerrsquos support period

Data protection model Modeling data protection methods as layers in a stack similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven layer network model is a familiar concept9 In the data protection stack each layer represents a discrete protection2 responsibility while the boundaries between layers designate potential exploits Traditionally we define the following four discrete protection layers sorted in order of most general to most specific storage object database and data10

At each layer itrsquos important to apply some form of protection Users obtain permission from multiple sources for example both the local operating system and a remote authorization server to revert a protected item back to its original form We can briefly describe these four layers by the following diagram

Integrating Data Protection Into Legacy SystemsMethods And Practices Jason Paul Kazarian

2 We use the term ldquoprotectionrdquo as a generic algorithm transform data from the original or plain-text form to an encoded or cipher-text form We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary

Layer

Application

Database

Object

Storage

Disk blocks

Files directories

Formatted data items

Flow represents transport of clear databetween layers via a secure tunnel Description represents example traffic

47

bull Storage protects data on a device at the block level before the application of a file system Each block is transformed using a reversible protection algorithm When the storage is in use an intermediary device driver reverts these blocks to their original state before passing them to the operating system

bull Object protects items such as files and folders within a file system Objects are returned to their original form before being opened by for example an image viewer or word processor

bull Database protects sensitive columns within a table Users with general schema access rights may browse columns but only in their encrypted or tokenized form Designated users with role-based access may re-identify the data items to browse the original sensitive items

bull Application protects sensitive data items prior to storage in a container for example a database or application server If an appropriate algorithm is employed protected data items will be equivalent to unprotected data items meaning having the same attributes format and size (but not the same value)

Once protection is bypassed at a particular layer attackers can use the same exploits as if the layer did not exist at all For example after a device driver mounts protected storage and translates blocks back to their original state operating system exploits are just as successful as if there was no storage protection As another example when an authorized user loads a protected document object that user may copy and paste the data to an unprotected storage location Since HHS statistics show 20 of breaches occur from unauthorized disclosure relying solely on storage or object protection is a serious security risk

A-priori data protection When adding data protection to a legacy system we will obtain better integration at lower cost by minimizing legacy system changes One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection11

As an exercise letrsquos consider ldquowrappingrdquo a legacy system with a new web interface12 that collects payment data from customers As the system collects more and more payment records the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data

Adding data protection at the storage object and database layers may be fiscally or technically (or both) challenging But what if the payment data itself was protected at ingress into the legacy system

Now letrsquos consider applying an FPE algorithm to a credit card number The input to this algorithm is a digit string typically

15 or 16 digits3 The output of this algorithm is another digit string that is

bull Equivalent besides the digit values all other characteristics of the output such as the character set and length are identical to the input

bull Referential an input credit card number always produces exactly the same output This output never collides with another credit card number Thus if a column of credit card numbers is protected via FPE the primary and foreign key relations among linked tables remain the same

bull Reversible the original input credit card number can be obtained using an inverse FPE algorithm

Now as we collect more and more customer records we no longer increase the ldquoblack marketrdquo opportunity If a hacker were to successfully breach our legacy credit card database that hacker would obtain row upon row of protected credit card numbers none of which could be used by the hacker to conduct a payment transaction Instead the payment interface having exclusive access to the inverse FPE algorithm would be the only node able to charge a transaction

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress Even if the data protection stack is breached below the application layer protected data remains anonymized and safe

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach Such breaches harm both businesses costing up to $240 per breached healthcare record13 and their customers costing consumers billions of dollars annually14 As the volume of data breached increases rapidly not just in financial markets but also in health care organizations are under pressure to add data protection to legacy systems

A less obvious benefit of application level data protection is the creation of new benefits from data sharing data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII) personal healthcare information (PHI) or payment card industry (PCI) data This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data

Let us consider two examples of data sharing benefits one from retail operations and one from healthcare Both examples are case studies showing how anonymizing data via an algorithm having equivalent referential and reversible properties enables performing analytics on large data sets outside of an organizationrsquos direct control

3 American Express uses a 15 digits while Discover Master Card and Visa use 16 instead Some store issued credit cards for example the Target Red Card use fewer digits but these are padded with leading zeroes to a full 16 digits

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015


"The backbone of the enterprise" – it's pretty common to hear SAP or Oracle business processing applications described that way, and rightly so. These are true mission-critical systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), and more. When they're not performing well, it gets noticed: customers' orders are delayed, staffers can't get their work done on time, execs have trouble accessing the data they need for optimal decision-making. It can easily spiral into damaging financial outcomes.

At many organizations, business processing application performance is looking creaky – especially around peak utilization times such as open enrollment and the financial close – as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services.

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business:

1. Reinvigorate RAS (reliability, availability, and serviceability). Companies are under constant pressure to improve RAS, whether it's from new regulatory requirements that impact their ERP systems, growing SLA demands, the need for new security features to protect valuable business data, or a host of other sources. The famous "five nines" of availability – 99.999% – is critical to the success of the business to avoid loss of customers and revenue; the short sketch below translates that figure into an annual downtime budget.

For a long time, many companies have relied on UNIX platforms for the high RAS that their applications demand, and they've been understandably reluctant to switch to newer infrastructure.

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment. Today's x86-based solutions offer comparable demonstrated capabilities while reducing long-term TCO and overall system OPEX. The x86 architecture is now dominant in the mission-critical business applications space. See the modernization success story below to learn how IT provider RI-Solution made the move.
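As a quick sanity check on what "five nines" actually permits, the following small Python sketch (plain arithmetic, not tied to any vendor's SLA terms) converts an availability percentage into an annual downtime budget.

# Convert an availability percentage into the downtime it allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (99.9, 99.99, 99.999):
    allowed = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% availability allows about {allowed:.1f} minutes of downtime per year")

At 99.999 percent, the budget is roughly five minutes per year, which is why even a single brief unplanned outage can exhaust it.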

2. Consolidate workloads and simplify a complex business processing landscape. Over time, the business has acquired multiple islands of database solutions that are now hosted on underutilized platforms. You can improve efficiency and simplify management by consolidating onto one scale-up server. Reducing Oracle or SAP licensing costs is another potential benefit of consolidation: IDC research showed SAP customers migrating to scale-up environments experienced up to 18% software licensing cost reduction and up to 55% reduction of IT infrastructure costs.

3. Access new functionality. A refresh can enable you to benefit from newer technologies like virtualization and cloud, as well as new storage options such as all-flash arrays. If you're an SAP shop, you're probably looking down the road to the end of support for R/3 and SAP Business Suite deployments in 2025, which will require a migration to SAP S/4HANA. Designed to leverage in-memory database processing, SAP S/4HANA offers some impressive benefits, including a much smaller data footprint, better throughput, and added flexibility.


Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role, she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance, and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy, and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE, and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unexpected downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT of RI-Solution, says: "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.


Congratulations to this Year's Future Leaders in Technology Recipients

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

44

45

Legacy systems remain critical to the continued operation of many global enterprises Recent cyber-attacks suggest legacy systems remain under protected especially considering

the asset values at stake Development of risk mitigations as point solutions has been minimally successful at best completely ineffective at worst

The NIST FFX data protection standard provides publically auditable data protection algorithms that reflect an applicationrsquos underlying data structure and storage semantics Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breachrsquos consequences

This paper will explore the application of data protection in a typical legacy system architecture Best practices are identified and presented

Legacy systems defined Traditionally legacy systems are complex information systems initially developed well in the past that remain critical to the business in which these systems operate in spite of being more difficult or expensive to maintain than modern systems1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time

We can classify legacy systems as supported to unsupported We consider a legacy system as supported when operating system publisher provides security patches on a regular open-market basis For example IBM zOS is a supported legacy system IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates For example Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems even though the US Navy obtains security patches for a nine million dollar annual fee3 as such patches are not offered to commercial XP or Server 2003 owners

Unsupported legacy systems present additional security risks as vulnerabilities are discovered and documented in more modern systems attackers use these unpatched vulnerabilities

to exploit an unsupported system Continuing this example Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 20144 This presents dozens of opportunities for hackers to exploit organizations still running XP

Security threats against legacy systems In June 2010 Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and publishing the inner workings of the Stuxnet computer virus5 Since then organized and state-sponsored hackers have profited from this cookbook for stealing data We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publically published by Health and Human Services (HHS) 6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant bounded by O(1) the number of records exposed has increased at O(2n) as illustrated by the following diagram1

Integrating Data Protection Into Legacy SystemsMethods And Practices Jason Paul Kazarian

1This analysis excludes the Anthem Inc breach reported on March 13 2015 as it alone is two times larger than the sum of all other breaches reported to date in 2015

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems He has thirty years of industry experience in the aerospace database security and telecommunications

domains He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University Dominguez Hills He may be reached at

jasonkazarianhpecom

46

Analysis of the data breach types shows that 31 are caused by either an outside attack or inside abuse split approximately 23 between these two types Further 24 of softcopy breach sources were from shared resources for example from emails electronic medical records or network servers Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches

Legacy system challenges Applying data security to legacy systems presents a series of interesting challenges Without developing a specific taxonomy we can categorize these challenges in no particular order as follows

bull System complexity legacy systems evolve over time and slowly adapt to handle increasingly complex business operations The more complex a system the more difficulty protecting that system from new security threats

bull Lack of knowledge the original designers and implementers of a legacy system may no longer be available to perform modifications7 Also critical system elements developed in-house may be undocumented meaning current employees may not have the knowledge necessary to perform modifications In other cases software source code may have not survived a storage device failure requiring assembly level patching to modify a critical system function

bull Legal limitations legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system For example a payment system may be considered evidence in a lawsuit preventing modification until the suit is settled

bull Subsystem incompatibility legacy system components may not be compatible with modern day hardware integration software or other practices and technologies Organizations may be responsible for providing their own development and maintenance environments without vendor support

bull Hardware limitations legacy systems may have adequate compute communication and storage resources for accomplishing originally intended tasks but not sufficient reserve to accommodate increased computational and storage responsibilities For example decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations

These challenges intensify if the legacy system in question is unsupported One key obstacle is vendors no longer provide resources for further development For example Apple Computer routinely stops updating systems after seven years8 It may become cost-prohibitive to modify a system if the manufacturer does provide any assistance Yet sensitive data stored on legacy systems must be protected as the datarsquos lifetime is usually much longer than any manufacturerrsquos support period

Data protection model Modeling data protection methods as layers in a stack similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven layer network model is a familiar concept9 In the data protection stack each layer represents a discrete protection2 responsibility while the boundaries between layers designate potential exploits Traditionally we define the following four discrete protection layers sorted in order of most general to most specific storage object database and data10

At each layer itrsquos important to apply some form of protection Users obtain permission from multiple sources for example both the local operating system and a remote authorization server to revert a protected item back to its original form We can briefly describe these four layers by the following diagram

Integrating Data Protection Into Legacy SystemsMethods And Practices Jason Paul Kazarian

2 We use the term ldquoprotectionrdquo as a generic algorithm transform data from the original or plain-text form to an encoded or cipher-text form We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary

Layer

Application

Database

Object

Storage

Disk blocks

Files directories

Formatted data items

Flow represents transport of clear databetween layers via a secure tunnel Description represents example traffic

47

bull Storage protects data on a device at the block level before the application of a file system Each block is transformed using a reversible protection algorithm When the storage is in use an intermediary device driver reverts these blocks to their original state before passing them to the operating system

bull Object protects items such as files and folders within a file system Objects are returned to their original form before being opened by for example an image viewer or word processor

bull Database protects sensitive columns within a table Users with general schema access rights may browse columns but only in their encrypted or tokenized form Designated users with role-based access may re-identify the data items to browse the original sensitive items

bull Application protects sensitive data items prior to storage in a container for example a database or application server If an appropriate algorithm is employed protected data items will be equivalent to unprotected data items meaning having the same attributes format and size (but not the same value)

Once protection is bypassed at a particular layer attackers can use the same exploits as if the layer did not exist at all For example after a device driver mounts protected storage and translates blocks back to their original state operating system exploits are just as successful as if there was no storage protection As another example when an authorized user loads a protected document object that user may copy and paste the data to an unprotected storage location Since HHS statistics show 20 of breaches occur from unauthorized disclosure relying solely on storage or object protection is a serious security risk

A-priori data protection When adding data protection to a legacy system we will obtain better integration at lower cost by minimizing legacy system changes One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection11

As an exercise letrsquos consider ldquowrappingrdquo a legacy system with a new web interface12 that collects payment data from customers As the system collects more and more payment records the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data

Adding data protection at the storage object and database layers may be fiscally or technically (or both) challenging But what if the payment data itself was protected at ingress into the legacy system

Now letrsquos consider applying an FPE algorithm to a credit card number The input to this algorithm is a digit string typically

15 or 16 digits3 The output of this algorithm is another digit string that is

bull Equivalent besides the digit values all other characteristics of the output such as the character set and length are identical to the input

bull Referential an input credit card number always produces exactly the same output This output never collides with another credit card number Thus if a column of credit card numbers is protected via FPE the primary and foreign key relations among linked tables remain the same

bull Reversible the original input credit card number can be obtained using an inverse FPE algorithm

Now as we collect more and more customer records we no longer increase the ldquoblack marketrdquo opportunity If a hacker were to successfully breach our legacy credit card database that hacker would obtain row upon row of protected credit card numbers none of which could be used by the hacker to conduct a payment transaction Instead the payment interface having exclusive access to the inverse FPE algorithm would be the only node able to charge a transaction

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress Even if the data protection stack is breached below the application layer protected data remains anonymized and safe

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach Such breaches harm both businesses costing up to $240 per breached healthcare record13 and their customers costing consumers billions of dollars annually14 As the volume of data breached increases rapidly not just in financial markets but also in health care organizations are under pressure to add data protection to legacy systems

A less obvious benefit of application level data protection is the creation of new benefits from data sharing data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII) personal healthcare information (PHI) or payment card industry (PCI) data This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data

Let us consider two examples of data sharing benefits one from retail operations and one from healthcare Both examples are case studies showing how anonymizing data via an algorithm having equivalent referential and reversible properties enables performing analytics on large data sets outside of an organizationrsquos direct control

3 American Express uses a 15 digits while Discover Master Card and Visa use 16 instead Some store issued credit cards for example the Target Red Card use fewer digits but these are padded with leading zeroes to a full 16 digits

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015

49

ldquoThe backbone of the enterpriserdquo ndash itrsquos pretty common to hear SAP or Oracle business processing applications described that way and rightly so These are true mission-critical systems including enterprise resource planning (ERP) customer relationship management (CRM) supply chain management (SCM) and more When theyrsquore not performing well it gets noticed customersrsquo orders are delayed staffers canrsquot get their work done on time execs have trouble accessing the data they need for optimal decision-making It can easily spiral into damaging financial outcomes

At many organizations business processing application performance is looking creaky ndash especially around peak utilization times such as open enrollment and the financial close ndash as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

T he Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders Established in 2010 Connect FLIT is a separateUS 501 (c)(3) corporation and all donations go directly to scholarship awards

Applications are accepted from around the world and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors including GPA standardized test scores letters of recommendation and a compelling essay

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering a t I o w a S t a te U n i ve rs i t y i n t h e Fa l l o f 2 0 1 6 I n addition to being a part of the honor roll at her high schoo l her in terest in computer sc ience c lasses has evolved into a passion for programming She learned the value of leadership when she was a participant in the Des Moines Partnershiprsquos Youth Leadership Initiative and continued mentoring for the program She combined her love of leadership and computer science together by becoming the president of Hyperstream the computer science club at her high school Ann embraces the spir it of service and has logged over 200 hours of community service One of Annrsquos favorite ac t i v i t i es in h igh schoo l was be ing a par t o f the archery c lub and is look ing to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian Virginia While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial sponsored by RichTech Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America He has obtained his Cisco Certified Network Associate CompTIA A+ Palo Alto Accredited Configuration Engineer and many other certifications Erwin has 47 GPA and plans to attend Virginia Commonwealth University in the fall of 2016

Erwin Karincic

No of course you wouldnrsquot But thatrsquos effectively what many companies do when they rely on activepassive or tape-based business continuity solutions Many companies never complete a practice failover exercise because these solutions are difficult to test They later find out the hard way that their recovery plan doesnrsquot work when they really need it

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of activepassive or tape-based solutions You wouldnrsquot jump out of an airplane without a working parachute so donrsquot rely on inadequate recovery solutions to maintain critical IT services when the time comes

copy2015 Gravic Inc All product names mentioned are trademarks of their respective owners Specifications subject to change without notice

Find out how HPE Shadowbase can help you be ready for anythingVisit wwwshadowbasesoftwarecom and wwwhpcomgononstopcontinuity

Business Partner

With HPE Shadowbase software yoursquoll know your parachute will open ndash every time

You wouldnrsquot jump out of an airplane unless you knew your parachute

worked ndash would you

  1. Facebook 2
  2. Twitter 2
  3. Linked In 2
  4. C3
  5. Facebook 3
  6. Twitter 3
  7. Linked In 3
  8. C4
  9. Stacie Facebook
  10. Button 4
  11. STacie Linked In
  12. Button 6
Page 48: Connect Converge Spring 2016

45

Legacy systems remain critical to the continued operation of many global enterprises Recent cyber-attacks suggest legacy systems remain under protected especially considering

the asset values at stake Development of risk mitigations as point solutions has been minimally successful at best completely ineffective at worst

The NIST FFX data protection standard provides publically auditable data protection algorithms that reflect an applicationrsquos underlying data structure and storage semantics Using data protection at the application level allows operations to continue after a data breach while simultaneously reducing the breachrsquos consequences

This paper will explore the application of data protection in a typical legacy system architecture Best practices are identified and presented

Legacy systems defined Traditionally legacy systems are complex information systems initially developed well in the past that remain critical to the business in which these systems operate in spite of being more difficult or expensive to maintain than modern systems1 Industry consensus suggests that legacy systems remain in production use as long as the total replacement cost exceeds the operational and maintenance cost over some long but finite period of time

We can classify legacy systems as supported to unsupported We consider a legacy system as supported when operating system publisher provides security patches on a regular open-market basis For example IBM zOS is a supported legacy system IBM continues to publish security and other updates for this operating system even though the initial release was fifteen years ago2

We consider a legacy system as unsupported when the publisher no longer provides regular security updates For example Microsoft Windows XP and Windows Server 2003 are unsupported legacy systems even though the US Navy obtains security patches for a nine million dollar annual fee3 as such patches are not offered to commercial XP or Server 2003 owners

Unsupported legacy systems present additional security risks as vulnerabilities are discovered and documented in more modern systems attackers use these unpatched vulnerabilities

to exploit an unsupported system Continuing this example Microsoft has published 110 security bulletins for Windows 7 since the retirement of XP in April 20144 This presents dozens of opportunities for hackers to exploit organizations still running XP

Security threats against legacy systems In June 2010 Roel Schouwenberg of anti-virus software firm Kaspersky Labs discovered and publishing the inner workings of the Stuxnet computer virus5 Since then organized and state-sponsored hackers have profited from this cookbook for stealing data We can validate the impact of such well-orchestrated breaches on legacy systems by performing an analysis on security breach statistics publically published by Health and Human Services (HHS) 6

Even though the number of health care security breach incidents between 2010 and 2015 has remained constant bounded by O(1) the number of records exposed has increased at O(2n) as illustrated by the following diagram1

Integrating Data Protection Into Legacy SystemsMethods And Practices Jason Paul Kazarian

1This analysis excludes the Anthem Inc breach reported on March 13 2015 as it alone is two times larger than the sum of all other breaches reported to date in 2015

Jason Paul Kazarian is a Senior Architect for Hewlett Packard Enterprise and specializes in integrating data security products with third-party subsystems He has thirty years of industry experience in the aerospace database security and telecommunications

domains He has an MS in Computer Science from the University of Texas at Dallas and a BS in Computer Science from California State University Dominguez Hills He may be reached at

jasonkazarianhpecom

46

Analysis of the data breach types shows that 31 are caused by either an outside attack or inside abuse split approximately 23 between these two types Further 24 of softcopy breach sources were from shared resources for example from emails electronic medical records or network servers Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches

Legacy system challenges Applying data security to legacy systems presents a series of interesting challenges Without developing a specific taxonomy we can categorize these challenges in no particular order as follows

bull System complexity legacy systems evolve over time and slowly adapt to handle increasingly complex business operations The more complex a system the more difficulty protecting that system from new security threats

bull Lack of knowledge the original designers and implementers of a legacy system may no longer be available to perform modifications7 Also critical system elements developed in-house may be undocumented meaning current employees may not have the knowledge necessary to perform modifications In other cases software source code may have not survived a storage device failure requiring assembly level patching to modify a critical system function

bull Legal limitations legacy systems participating in regulated activities or subject to auditing and compliance policies may require non-engineering resources or permissions before modifying the system For example a payment system may be considered evidence in a lawsuit preventing modification until the suit is settled

bull Subsystem incompatibility legacy system components may not be compatible with modern day hardware integration software or other practices and technologies Organizations may be responsible for providing their own development and maintenance environments without vendor support

bull Hardware limitations legacy systems may have adequate compute communication and storage resources for accomplishing originally intended tasks but not sufficient reserve to accommodate increased computational and storage responsibilities For example decrypting data prior to each and every use may be too performance intensive for existing legacy system configurations

These challenges intensify if the legacy system in question is unsupported One key obstacle is vendors no longer provide resources for further development For example Apple Computer routinely stops updating systems after seven years8 It may become cost-prohibitive to modify a system if the manufacturer does provide any assistance Yet sensitive data stored on legacy systems must be protected as the datarsquos lifetime is usually much longer than any manufacturerrsquos support period

Data protection model Modeling data protection methods as layers in a stack similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven layer network model is a familiar concept9 In the data protection stack each layer represents a discrete protection2 responsibility while the boundaries between layers designate potential exploits Traditionally we define the following four discrete protection layers sorted in order of most general to most specific storage object database and data10

At each layer itrsquos important to apply some form of protection Users obtain permission from multiple sources for example both the local operating system and a remote authorization server to revert a protected item back to its original form We can briefly describe these four layers by the following diagram

Integrating Data Protection Into Legacy SystemsMethods And Practices Jason Paul Kazarian

2 We use the term ldquoprotectionrdquo as a generic algorithm transform data from the original or plain-text form to an encoded or cipher-text form We use more specific terms such as encryption and tokenization when identification of the actual algorithm is necessary

Layer

Application

Database

Object

Storage

Disk blocks

Files directories

Formatted data items

Flow represents transport of clear databetween layers via a secure tunnel Description represents example traffic

47

bull Storage protects data on a device at the block level before the application of a file system Each block is transformed using a reversible protection algorithm When the storage is in use an intermediary device driver reverts these blocks to their original state before passing them to the operating system

bull Object protects items such as files and folders within a file system Objects are returned to their original form before being opened by for example an image viewer or word processor

bull Database protects sensitive columns within a table Users with general schema access rights may browse columns but only in their encrypted or tokenized form Designated users with role-based access may re-identify the data items to browse the original sensitive items

bull Application protects sensitive data items prior to storage in a container for example a database or application server If an appropriate algorithm is employed protected data items will be equivalent to unprotected data items meaning having the same attributes format and size (but not the same value)

Once protection is bypassed at a particular layer attackers can use the same exploits as if the layer did not exist at all For example after a device driver mounts protected storage and translates blocks back to their original state operating system exploits are just as successful as if there was no storage protection As another example when an authorized user loads a protected document object that user may copy and paste the data to an unprotected storage location Since HHS statistics show 20 of breaches occur from unauthorized disclosure relying solely on storage or object protection is a serious security risk

A-priori data protection When adding data protection to a legacy system we will obtain better integration at lower cost by minimizing legacy system changes One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection11

As an exercise letrsquos consider ldquowrappingrdquo a legacy system with a new web interface12 that collects payment data from customers As the system collects more and more payment records the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data

Adding data protection at the storage object and database layers may be fiscally or technically (or both) challenging But what if the payment data itself was protected at ingress into the legacy system

Now letrsquos consider applying an FPE algorithm to a credit card number The input to this algorithm is a digit string typically

15 or 16 digits3 The output of this algorithm is another digit string that is

bull Equivalent besides the digit values all other characteristics of the output such as the character set and length are identical to the input

bull Referential an input credit card number always produces exactly the same output This output never collides with another credit card number Thus if a column of credit card numbers is protected via FPE the primary and foreign key relations among linked tables remain the same

bull Reversible the original input credit card number can be obtained using an inverse FPE algorithm

Now as we collect more and more customer records we no longer increase the ldquoblack marketrdquo opportunity If a hacker were to successfully breach our legacy credit card database that hacker would obtain row upon row of protected credit card numbers none of which could be used by the hacker to conduct a payment transaction Instead the payment interface having exclusive access to the inverse FPE algorithm would be the only node able to charge a transaction

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress Even if the data protection stack is breached below the application layer protected data remains anonymized and safe

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach Such breaches harm both businesses costing up to $240 per breached healthcare record13 and their customers costing consumers billions of dollars annually14 As the volume of data breached increases rapidly not just in financial markets but also in health care organizations are under pressure to add data protection to legacy systems

A less obvious benefit of application level data protection is the creation of new benefits from data sharing data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII) personal healthcare information (PHI) or payment card industry (PCI) data This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data

Let us consider two examples of data sharing benefits one from retail operations and one from healthcare Both examples are case studies showing how anonymizing data via an algorithm having equivalent referential and reversible properties enables performing analytics on large data sets outside of an organizationrsquos direct control

3 American Express uses a 15 digits while Discover Master Card and Visa use 16 instead Some store issued credit cards for example the Target Red Card use fewer digits but these are padded with leading zeroes to a full 16 digits

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015

49

ldquoThe backbone of the enterpriserdquo ndash itrsquos pretty common to hear SAP or Oracle business processing applications described that way and rightly so These are true mission-critical systems including enterprise resource planning (ERP) customer relationship management (CRM) supply chain management (SCM) and more When theyrsquore not performing well it gets noticed customersrsquo orders are delayed staffers canrsquot get their work done on time execs have trouble accessing the data they need for optimal decision-making It can easily spiral into damaging financial outcomes

At many organizations business processing application performance is looking creaky ndash especially around peak utilization times such as open enrollment and the financial close ndash as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortes is a Product Marketing Manager for Integrity Superdome X Servers. In this role she is responsible for the outbound marketing strategy and execution for this product family. Prior to her work with Superdome X, Diana held a variety of marketing, planning, finance and business development positions within HP across the globe. She has a background in mission-critical solutions and is interested in how these solutions impact the business. Cortes holds a Bachelor of Science in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administration from Georgetown University. She is currently based in Stockholm, Sweden. diana.cortes@hp.com

A Modernization Success Story: RI-Solution Data GmbH is an IT provider to BayWa AG, a global services group in the agriculture, energy and construction sectors. BayWa's SAP retail system is one of the world's largest, with more than 6,000 concurrent users. RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture. The goals were to accelerate performance, reduce TCO by standardizing on HPE and improve real-time analysis.

With the new servers, RI-Solution expects to reduce SAP costs by 60 percent and achieve a 100 percent performance improvement, and has already improved application response times by up to 33 percent. The port of the SAP retail application went live with no unplanned downtime and has remained highly reliable since the migration. Andreas Stibi, Head of IT at RI-Solution, says, "We are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server. Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure. That flexibility was a compelling benefit that led us to select the Superdome X for our mission-critical SAP applications." Watch this short video or read the full RI-Solution case study here.

Whatever path you choose, HPE can help you migrate successfully. Learn more about the Best Practices of Modernizing your SAP business processing applications.

Looking forward to seeing you.


Congratulations to this Year's Future Leaders in Technology Recipients!

The Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders. Established in 2010, Connect FLIT is a separate US 501(c)(3) corporation, and all donations go directly to scholarship awards.

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.


Analysis of the data breach types shows that 31% are caused by either an outside attack or inside abuse, split approximately 2/3 between these two types. Further, 24% of softcopy breach sources were from shared resources, for example from emails, electronic medical records or network servers. Thus legacy systems involved with electronic records need both access and data security to reduce the impact of security breaches.

Legacy system challenges. Applying data security to legacy systems presents a series of interesting challenges. Without developing a specific taxonomy, we can categorize these challenges, in no particular order, as follows:

• System complexity: legacy systems evolve over time and slowly adapt to handle increasingly complex business operations. The more complex a system, the more difficult it is to protect that system from new security threats.

• Lack of knowledge: the original designers and implementers of a legacy system may no longer be available to perform modifications.[7] Also, critical system elements developed in-house may be undocumented, meaning current employees may not have the knowledge necessary to perform modifications. In other cases, software source code may not have survived a storage device failure, requiring assembly-level patching to modify a critical system function.

• Legal limitations: legacy systems participating in regulated activities, or subject to auditing and compliance policies, may require non-engineering resources or permissions before modifying the system. For example, a payment system may be considered evidence in a lawsuit, preventing modification until the suit is settled.

• Subsystem incompatibility: legacy system components may not be compatible with modern-day hardware, integration software, or other practices and technologies. Organizations may be responsible for providing their own development and maintenance environments without vendor support.

• Hardware limitations: legacy systems may have adequate compute, communication and storage resources for accomplishing originally intended tasks, but not sufficient reserve to accommodate increased computational and storage responsibilities. For example, decrypting data prior to each and every use may be too performance-intensive for existing legacy system configurations.

These challenges intensify if the legacy system in question is unsupported. One key obstacle is that vendors no longer provide resources for further development. For example, Apple Computer routinely stops updating systems after seven years.[8] It may become cost-prohibitive to modify a system if the manufacturer does not provide any assistance. Yet sensitive data stored on legacy systems must be protected, as the data's lifetime is usually much longer than any manufacturer's support period.

Data protection model. Modeling data protection methods as layers in a stack, similar to how network engineers characterize interactions between hardware and software via the Open Systems Interconnect seven-layer network model, is a familiar concept.[9] In the data protection stack, each layer represents a discrete protection[2] responsibility, while the boundaries between layers designate potential exploits. Traditionally we define the following four discrete protection layers, sorted in order of most general to most specific: storage, object, database and data.[10]

At each layer it's important to apply some form of protection. Users obtain permission from multiple sources, for example both the local operating system and a remote authorization server, to revert a protected item back to its original form. We can briefly describe these four layers with the following diagram:


2. We use the term "protection" for a generic algorithm that transforms data from its original or plain-text form to an encoded or cipher-text form. We use more specific terms, such as encryption and tokenization, when identification of the actual algorithm is necessary.

[Diagram: the data protection stack. Layers, from most specific to most general: Application, Database, Object, Storage. Example traffic passed between layers: formatted data items, files and directories, disk blocks. Flow represents transport of clear data between layers via a secure tunnel; Description represents example traffic.]


• Storage protects data on a device at the block level, before the application of a file system. Each block is transformed using a reversible protection algorithm. When the storage is in use, an intermediary device driver reverts these blocks to their original state before passing them to the operating system.

• Object protects items such as files and folders within a file system. Objects are returned to their original form before being opened by, for example, an image viewer or word processor.

• Database protects sensitive columns within a table. Users with general schema access rights may browse columns, but only in their encrypted or tokenized form. Designated users with role-based access may re-identify the data items to browse the original sensitive items.

• Application protects sensitive data items prior to storage in a container, for example a database or application server. If an appropriate algorithm is employed, protected data items will be equivalent to unprotected data items, meaning they have the same attributes, format and size (but not the same value).

Once protection is bypassed at a particular layer, attackers can use the same exploits as if the layer did not exist at all. For example, after a device driver mounts protected storage and translates blocks back to their original state, operating system exploits are just as successful as if there were no storage protection. As another example, when an authorized user loads a protected document object, that user may copy and paste the data to an unprotected storage location. Since HHS statistics show 20% of breaches occur from unauthorized disclosure, relying solely on storage or object protection is a serious security risk.

A-priori data protection. When adding data protection to a legacy system, we will obtain better integration at lower cost by minimizing legacy system changes. One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change. The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection.[11]

As an exercise, let's consider "wrapping" a legacy system with a new web interface[12] that collects payment data from customers. As the system collects more and more payment records, it also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data.

Adding data protection at the storage, object and database layers may be fiscally or technically (or both) challenging. But what if the payment data itself was protected at ingress into the legacy system?

Now let's consider applying an FPE algorithm to a credit card number. The input to this algorithm is a digit string, typically 15 or 16 digits.[3] The output of this algorithm is another digit string that is:

• Equivalent: besides the digit values, all other characteristics of the output, such as the character set and length, are identical to the input.

• Referential: an input credit card number always produces exactly the same output, and that output never collides with the output for another credit card number. Thus, if a column of credit card numbers is protected via FPE, the primary and foreign key relations among linked tables remain the same.

• Reversible: the original input credit card number can be obtained using an inverse FPE algorithm.

Now, as we collect more and more customer records, we no longer increase the "black market" opportunity. If a hacker were to successfully breach our legacy credit card database, that hacker would obtain row upon row of protected credit card numbers, none of which could be used to conduct a payment transaction. Instead, the payment interface, having exclusive access to the inverse FPE algorithm, would be the only node able to charge a transaction.

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress. Even if the data protection stack is breached below the application layer, protected data remains anonymized and safe.
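To make the ingress/egress pattern concrete, here is a minimal sketch in Python. It is not an implementation of the NIST FFX algorithms: the protect()/reveal() pair and SECRET_KEY below are hypothetical names, and the keyed digit substitution is a toy stand-in used only to illustrate the equivalent, referential and reversible properties described above. A production system would call a vetted FPE library instead.

```python
# Toy stand-in for format-preserving protection of a card number (NOT real FPE).
# protect() runs at ingress (e.g., in the web wrapper); reveal() is held only by
# the payment interface that must charge transactions.

SECRET_KEY = 7  # hypothetical key material; real systems use managed keys

def protect(digits: str, key: int = SECRET_KEY) -> str:
    """Keyed per-position digit substitution; length and character set are preserved."""
    return "".join(str((int(d) + key * (i + 1)) % 10) for i, d in enumerate(digits))

def reveal(protected: str, key: int = SECRET_KEY) -> str:
    """Inverse mapping, applied only at egress by the authorized node."""
    return "".join(str((int(d) - key * (i + 1)) % 10) for i, d in enumerate(protected))

pan = "4111111111111111"        # 16-digit test card number
token = protect(pan)

assert token.isdigit() and len(token) == len(pan)  # equivalent: same format and length
assert protect(pan) == token                       # referential: same input, same output
assert reveal(token) == pan                        # reversible: egress recovers the original
print(token)                                       # what a breached table would expose
```

A database column populated with such values still joins correctly against other protected tables, yet a dump of that column yields nothing an attacker could use to charge a transaction.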

Benefits of sharing protected data. One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach. Such breaches harm both businesses, costing up to $240 per breached healthcare record,[13] and their customers, costing consumers billions of dollars annually.[14] As the volume of data breached increases rapidly, not just in financial markets but also in health care, organizations are under pressure to add data protection to legacy systems.

A less obvious benefit of application-level data protection is the creation of new value from data sharing: data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII), personal healthcare information (PHI) or payment card industry (PCI) data. This allows an organization to obtain cost reductions and efficiency gains by performing third-party analytics on anonymized data.
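As a small illustration of that point (reusing the hypothetical protect() stand-in from the earlier sketch, so again this is not real FPE and the table contents are invented), two de-identified extracts can still be joined on the protected key before being handed to an outside analytics firm:

```python
SECRET_KEY = 7  # same toy stand-in as in the earlier sketch (NOT real FPE)

def protect(digits: str, key: int = SECRET_KEY) -> str:
    return "".join(str((int(d) + key * (i + 1)) % 10) for i, d in enumerate(digits))

# Hypothetical de-identified extracts shared with a third party: patient
# identifiers are protected, but the key relationships between tables survive.
visits = [
    {"patient": protect("1234567890"), "clinic": "North", "visits": 3},
    {"patient": protect("9876543210"), "clinic": "South", "visits": 1},
]
labs = [
    {"patient": protect("1234567890"), "a1c": 7.9},
    {"patient": protect("9876543210"), "a1c": 5.4},
]

# Join on the protected key; no plain-text PII/PHI ever leaves the organization.
joined = {row["patient"]: dict(row) for row in visits}
for row in labs:
    joined[row["patient"]]["a1c"] = row["a1c"]

for record in joined.values():
    print(record)
```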

Let us consider two examples of data sharing benefits, one from retail operations and one from healthcare. Both examples are case studies showing how anonymizing data via an algorithm having equivalent, referential and reversible properties enables performing analytics on large data sets outside of an organization's direct control.

3. American Express uses 15 digits, while Discover, MasterCard and Visa use 16. Some store-issued credit cards, for example the Target Red Card, use fewer digits, but these are padded with leading zeroes to a full 16 digits.


For our retail operations example, a telecommunications carrier currently anonymizes retail operations data (including "brick and mortar" as well as on-line stores) using the FPE algorithm, passing the protected data sets to an independent analytics firm. This allows the carrier to perform "360° view" analytics[15] for optimizing sales efficiency. Without anonymizing this data prior to delivery to a third party, the carrier would risk exposing sensitive information to competitors in the event of a data breach.

For our clinical studies example, a Chief Health Information Officer states that clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening, finding the five percent most at risk for acquiring a serious chronic condition.[16] De-identifying this data with FPE enables sharing patient data across a regional hospital system, or even nationally. Without such protection, care providers risk fines from the government[17] and chargebacks from insurance companies[18] if live data is breached.

Summary. Legacy systems present challenges when applying storage, object and database layer security. Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent, referential and reversible data protection with minimal change to the underlying legacy system. Breaches that may subsequently occur expose only anonymized data. Organizations may still perform the functions originally intended, as well as new functions enabled by sharing anonymized data.

1. Ransom, J., Somerville, I. & Warren, I. (1998, March). A method for assessing legacy systems for evolution. In Software Maintenance and Reengineering, 1998: Proceedings of the Second Euromicro Conference (pp. 128-134). IEEE.
2. IBM Corporation. "z/OS announcements, statements of direction and notable changes." IBM, Armonk, NY, US, 11 Apr. 2012. Web. 19 Jan. 2016.
3. Cullen, Drew. "Beyond the Grave: US Navy Pays Peanuts for Windows XP Support." The Register, London, GB, UK, 25 June 2015. Web. 8 Oct. 2015.
4. Microsoft Corporation. "Microsoft Security Bulletin." Security TechCenter, Microsoft TechNet, 8 Sept. 2015. Web. 8 Oct. 2015.
5. Kushner, David. "The Real Story of Stuxnet." Spectrum, Institute of Electrical and Electronics Engineers, 26 Feb. 2013. Web. 02 Nov. 2015.
6. US Department of Health & Human Services, Office of Civil Rights. Notice to the Secretary of HHS: Breach of Unsecured Protected Health Information. Comp. HHS Secretary. Washington, DC, USA: US HHS, 2015. Breach Portal. Web. 3 Nov. 2015.
7. Comella-Dorda, S., Wallnau, K., Seacord, R. C. & Robert, J. (2000). A survey of legacy system modernization approaches (No. CMU/SEI-2000-TN-003). Carnegie-Mellon University, Pittsburgh, PA: Software Engineering Institute.
8. Apple Computer Inc. "Vintage and Obsolete Products." Apple Support, Cupertino, CA, US, 09 Oct. 2015. Web.
9. Wikipedia. "OSI Model." Wikimedia Foundation, San Francisco, CA, US. Web. 19 Jan. 2016.
10. Martin, Luther. "Protecting Your Data: It's Not Your Father's Encryption." Information Systems Security, Auerbach, 14 Aug. 2009. Web. 08 Oct. 2015.
11. Bellare, M., Rogaway, P. & Spies, T. The FFX mode of operation for format-preserving encryption (Draft 1.1). February 2010. Manuscript (standards proposal) submitted to NIST.
12. Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-2), 293-313.
13. Gross, Art. "A Look at the Cost of Healthcare Data Breaches." HIPAA Secure Now, Morristown, NJ, USA, 30 Mar. 2012. Web. 02 Nov. 2015.
14. "Data Breaches Cost Consumers Billions of Dollars." TODAY Money, NBC News, 5 June 2013. Web. 09 Oct. 2015.
15. Barton, D. & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78-83.
16. Showalter, John, MD. "Big Health Data & Analytics." Healthtech Council Summit, Gettysburg, PA, USA, 30 June 2015. Speech.
17. McCann, Erin. "Hospitals Fined $4.8M for HIPAA Violation." Government Health IT, HIMSS Media, 9 May 2014. Web. 15 Oct. 2015.
18. Nicols, Shaun. "Insurer Tells Hospitals: You Let Hackers In, We're Not Bailing You Out." The Register, London, GB, UK, 28 May 2015. Web. 15 Oct. 2015.

49

ldquoThe backbone of the enterpriserdquo ndash itrsquos pretty common to hear SAP or Oracle business processing applications described that way and rightly so These are true mission-critical systems including enterprise resource planning (ERP) customer relationship management (CRM) supply chain management (SCM) and more When theyrsquore not performing well it gets noticed customersrsquo orders are delayed staffers canrsquot get their work done on time execs have trouble accessing the data they need for optimal decision-making It can easily spiral into damaging financial outcomes

At many organizations business processing application performance is looking creaky ndash especially around peak utilization times such as open enrollment and the financial close ndash as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

T he Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders Established in 2010 Connect FLIT is a separateUS 501 (c)(3) corporation and all donations go directly to scholarship awards

Applications are accepted from around the world and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors including GPA standardized test scores letters of recommendation and a compelling essay

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering a t I o w a S t a te U n i ve rs i t y i n t h e Fa l l o f 2 0 1 6 I n addition to being a part of the honor roll at her high schoo l her in terest in computer sc ience c lasses has evolved into a passion for programming She learned the value of leadership when she was a participant in the Des Moines Partnershiprsquos Youth Leadership Initiative and continued mentoring for the program She combined her love of leadership and computer science together by becoming the president of Hyperstream the computer science club at her high school Ann embraces the spir it of service and has logged over 200 hours of community service One of Annrsquos favorite ac t i v i t i es in h igh schoo l was be ing a par t o f the archery c lub and is look ing to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian Virginia While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial sponsored by RichTech Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America He has obtained his Cisco Certified Network Associate CompTIA A+ Palo Alto Accredited Configuration Engineer and many other certifications Erwin has 47 GPA and plans to attend Virginia Commonwealth University in the fall of 2016

Erwin Karincic

No of course you wouldnrsquot But thatrsquos effectively what many companies do when they rely on activepassive or tape-based business continuity solutions Many companies never complete a practice failover exercise because these solutions are difficult to test They later find out the hard way that their recovery plan doesnrsquot work when they really need it

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of activepassive or tape-based solutions You wouldnrsquot jump out of an airplane without a working parachute so donrsquot rely on inadequate recovery solutions to maintain critical IT services when the time comes

copy2015 Gravic Inc All product names mentioned are trademarks of their respective owners Specifications subject to change without notice

Find out how HPE Shadowbase can help you be ready for anythingVisit wwwshadowbasesoftwarecom and wwwhpcomgononstopcontinuity

Business Partner

With HPE Shadowbase software yoursquoll know your parachute will open ndash every time

You wouldnrsquot jump out of an airplane unless you knew your parachute

worked ndash would you

  1. Facebook 2
  2. Twitter 2
  3. Linked In 2
  4. C3
  5. Facebook 3
  6. Twitter 3
  7. Linked In 3
  8. C4
  9. Stacie Facebook
  10. Button 4
  11. STacie Linked In
  12. Button 6
Page 50: Connect Converge Spring 2016

47

bull Storage protects data on a device at the block level before the application of a file system Each block is transformed using a reversible protection algorithm When the storage is in use an intermediary device driver reverts these blocks to their original state before passing them to the operating system

bull Object protects items such as files and folders within a file system Objects are returned to their original form before being opened by for example an image viewer or word processor

bull Database protects sensitive columns within a table Users with general schema access rights may browse columns but only in their encrypted or tokenized form Designated users with role-based access may re-identify the data items to browse the original sensitive items

bull Application protects sensitive data items prior to storage in a container for example a database or application server If an appropriate algorithm is employed protected data items will be equivalent to unprotected data items meaning having the same attributes format and size (but not the same value)

Once protection is bypassed at a particular layer attackers can use the same exploits as if the layer did not exist at all For example after a device driver mounts protected storage and translates blocks back to their original state operating system exploits are just as successful as if there was no storage protection As another example when an authorized user loads a protected document object that user may copy and paste the data to an unprotected storage location Since HHS statistics show 20 of breaches occur from unauthorized disclosure relying solely on storage or object protection is a serious security risk

A-priori data protection When adding data protection to a legacy system we will obtain better integration at lower cost by minimizing legacy system changes One method for doing so is to add protection a priori on incoming data (and remove such protection on outgoing data) in such a manner that the legacy system itself sees no change The NIST FFX format-preserving encryption (FPE) algorithms allow adding such protection11

As an exercise letrsquos consider ldquowrappingrdquo a legacy system with a new web interface12 that collects payment data from customers As the system collects more and more payment records the system also collects more and more attention from private and state-sponsored hackers wishing to make illicit use of this data

Adding data protection at the storage object and database layers may be fiscally or technically (or both) challenging But what if the payment data itself was protected at ingress into the legacy system

Now letrsquos consider applying an FPE algorithm to a credit card number The input to this algorithm is a digit string typically

15 or 16 digits3 The output of this algorithm is another digit string that is

bull Equivalent besides the digit values all other characteristics of the output such as the character set and length are identical to the input

bull Referential an input credit card number always produces exactly the same output This output never collides with another credit card number Thus if a column of credit card numbers is protected via FPE the primary and foreign key relations among linked tables remain the same

bull Reversible the original input credit card number can be obtained using an inverse FPE algorithm

Now as we collect more and more customer records we no longer increase the ldquoblack marketrdquo opportunity If a hacker were to successfully breach our legacy credit card database that hacker would obtain row upon row of protected credit card numbers none of which could be used by the hacker to conduct a payment transaction Instead the payment interface having exclusive access to the inverse FPE algorithm would be the only node able to charge a transaction

FPE affords the ability to protect data at ingress into an underlying system and reverse that protection at egress Even if the data protection stack is breached below the application layer protected data remains anonymized and safe

Benefits of sharing protected data One obvious benefit of implementing a priori data protection at the application level is the elimination or reduction of risk from an unanticipated data breach Such breaches harm both businesses costing up to $240 per breached healthcare record13 and their customers costing consumers billions of dollars annually14 As the volume of data breached increases rapidly not just in financial markets but also in health care organizations are under pressure to add data protection to legacy systems

A less obvious benefit of application level data protection is the creation of new benefits from data sharing data protected with a referential algorithm allows sharing the relations among data sets without exposing personally identifiable information (PII) personal healthcare information (PHI) or payment card industry (PCI) data This allows an organization to obtain cost reduction and efficiency gains by performing third-party analytics on anonymized data

Let us consider two examples of data sharing benefits one from retail operations and one from healthcare Both examples are case studies showing how anonymizing data via an algorithm having equivalent referential and reversible properties enables performing analytics on large data sets outside of an organizationrsquos direct control

3 American Express uses a 15 digits while Discover Master Card and Visa use 16 instead Some store issued credit cards for example the Target Red Card use fewer digits but these are padded with leading zeroes to a full 16 digits

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015

49

ldquoThe backbone of the enterpriserdquo ndash itrsquos pretty common to hear SAP or Oracle business processing applications described that way and rightly so These are true mission-critical systems including enterprise resource planning (ERP) customer relationship management (CRM) supply chain management (SCM) and more When theyrsquore not performing well it gets noticed customersrsquo orders are delayed staffers canrsquot get their work done on time execs have trouble accessing the data they need for optimal decision-making It can easily spiral into damaging financial outcomes

At many organizations business processing application performance is looking creaky ndash especially around peak utilization times such as open enrollment and the financial close ndash as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

T he Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders Established in 2010 Connect FLIT is a separateUS 501 (c)(3) corporation and all donations go directly to scholarship awards

Applications are accepted from around the world and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors including GPA standardized test scores letters of recommendation and a compelling essay

Now in its fifth year we are pleased to announce the recipients of the 2015 awards

Ann Gould is excited to study Software Engineering a t I o w a S t a te U n i ve rs i t y i n t h e Fa l l o f 2 0 1 6 I n addition to being a part of the honor roll at her high schoo l her in terest in computer sc ience c lasses has evolved into a passion for programming She learned the value of leadership when she was a participant in the Des Moines Partnershiprsquos Youth Leadership Initiative and continued mentoring for the program She combined her love of leadership and computer science together by becoming the president of Hyperstream the computer science club at her high school Ann embraces the spir it of service and has logged over 200 hours of community service One of Annrsquos favorite ac t i v i t i es in h igh schoo l was be ing a par t o f the archery c lub and is look ing to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian Virginia While in high school he completed a full-time paid internship at the Fortune 500 company Genworth Financial sponsored by RichTech Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America He has obtained his Cisco Certified Network Associate CompTIA A+ Palo Alto Accredited Configuration Engineer and many other certifications Erwin has 47 GPA and plans to attend Virginia Commonwealth University in the fall of 2016

Erwin Karincic

No of course you wouldnrsquot But thatrsquos effectively what many companies do when they rely on activepassive or tape-based business continuity solutions Many companies never complete a practice failover exercise because these solutions are difficult to test They later find out the hard way that their recovery plan doesnrsquot work when they really need it

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of activepassive or tape-based solutions You wouldnrsquot jump out of an airplane without a working parachute so donrsquot rely on inadequate recovery solutions to maintain critical IT services when the time comes

copy2015 Gravic Inc All product names mentioned are trademarks of their respective owners Specifications subject to change without notice

Find out how HPE Shadowbase can help you be ready for anythingVisit wwwshadowbasesoftwarecom and wwwhpcomgononstopcontinuity

Business Partner

With HPE Shadowbase software yoursquoll know your parachute will open ndash every time

You wouldnrsquot jump out of an airplane unless you knew your parachute

worked ndash would you

  1. Facebook 2
  2. Twitter 2
  3. Linked In 2
  4. C3
  5. Facebook 3
  6. Twitter 3
  7. Linked In 3
  8. C4
  9. Stacie Facebook
  10. Button 4
  11. STacie Linked In
  12. Button 6
Page 51: Connect Converge Spring 2016

48

For our retail operations example a telecommunications carrier currently anonymizes retail operations data (including ldquobrick and mortarrdquo as well as on-line stores) using the FPE algorithm passing the protected data sets to an independent analytics firm This allows the carrier to perform ldquo360deg viewrdquo analytics15 for optimizing sales efficiency Without anonymizing this data prior to delivery to a third party the carrier would risk exposing sensitive information to competitors in the event of a data breach

For our clinical studies example a Chief Health Information Officer states clinic visit data may be analyzed to identify which patients should be asked to contact their physicians for further screening finding the five percent most at risk for acquiring a serious chronic condition16 De-identifying this data with FPE sharing patient data across a regional hospital system or even nationally Without such protection care providers risk fines from the government17 and chargebacks from insurance companies18 if live data is breached

Summary Legacy systems present challenges when applying storage object and database layer security Security is simplified by applying NIST FFX standard FPE algorithms at the application layer for equivalent referential and reversible data protection with minimal change to the underlying legacy system Breaches that may subsequently occur expose only anonymized data Organizations may still perform both functions originally intended as well as new functions enabled by sharing anonymized data

1 Ransom J Somerville I amp Warren I (1998 March) A method for assessing legacy systems for evolution In Software Maintenance and Reengineering 1998 Proceedings of the Second Euromicro Conference on (pp 128-134) IEEE2 IBM Corporation ldquozOS announcements statements of direction and notable changesrdquo IBM Armonk NY US 11 Apr 2012 Web 19 Jan 20163 Cullen Drew ldquoBeyond the Grave US Navy Pays Peanuts for Windows XP Supportrdquo The Register London GB UK 25 June 2015 Web 8 Oct 20154 Microsoft Corporation ldquoMicrosoft Security Bulletinrdquo Security TechCenter Microsoft TechNet 8 Sept 2015 Web 8 Oct 20155 Kushner David ldquoThe Real Story of Stuxnetrdquo Spectrum Institute of Electrical and Electronic Engineers 26 Feb 2013 Web 02 Nov 20156 US Department of Health amp Human Services Office of Civil Rights Notice to the Secretary of HHS Breach of Unsecured Protected Health Information CompHHS Secretary Washington DC USA US HHS 2015 Breach Portal Web 3 Nov 20157 Comella-Dorda S Wallnau K Seacord R C amp Robert J (2000) A survey of legacy system modernization approaches (No CMUSEI-2000-TN-003)Carnegie-Mellon University Pittsburgh PA Software Engineering Institute8 Apple Computer Inc ldquoVintage and Obsolete Productsrdquo Apple Support Cupertino CA US 09 Oct 2015 Web9 Wikipedia ldquoOSI Modelrdquo Wikimedia Foundation San Francisco CA US Web 19 Jan 201610 Martin Luther ldquoProtecting Your Data Itrsquos Not Your Fatherrsquos Encryptionrdquo Information Systems Security Auerbach 14 Aug 2009 Web 08 Oct 201511 Bellare M Rogaway P amp Spies T The FFX mode of operation for format-preserving encryption (Draft 11) February 2010 Manuscript (standards proposal)submitted to NIST12 Sneed H M (2000) Encapsulation of legacy software A technique for reusing legacy software components Annals of Software Engineering 9(1-2) 293-31313 Gross Art ldquoA Look at the Cost of Healthcare Data Breaches -rdquo HIPAA Secure Now Morristown NJ USA 30 Mar 2012 Web 02 Nov 201514 ldquoData Breaches Cost Consumers Billions of Dollarsrdquo TODAY Money NBC News 5 June 2013 Web 09 Oct 201515 Barton D amp Court D (2012) Making advanced analytics work for you Harvard business review 90(10) 78-8316 Showalter John MD ldquoBig Health Data amp Analyticsrdquo Healthtech Council Summit Gettysburg PA USA 30 June 2015 Speech17 McCann Erin ldquoHospitals Fined $48M for HIPAA Violationrdquo Government Health IT HIMSS Media 9 May 2014 Web 15 Oct 201518 Nicols Shaun ldquoInsurer Tells Hospitals You Let Hackers In Wersquore Not Bailing You outrdquo The Register London GB UK 28 May 2015 Web 15 Oct 2015

49

ldquoThe backbone of the enterpriserdquo ndash itrsquos pretty common to hear SAP or Oracle business processing applications described that way and rightly so These are true mission-critical systems including enterprise resource planning (ERP) customer relationship management (CRM) supply chain management (SCM) and more When theyrsquore not performing well it gets noticed customersrsquo orders are delayed staffers canrsquot get their work done on time execs have trouble accessing the data they need for optimal decision-making It can easily spiral into damaging financial outcomes

At many organizations business processing application performance is looking creaky ndash especially around peak utilization times such as open enrollment and the financial close ndash as aging infrastructure meets rapidly growing transaction volumes and rising expectations for IT services

Here are three good reasons to consider a modernization project to breathe new life into the solutions that keep you in business

1 Reinvigorate RAS (reliability availability and service ability) Companies are under constant pressure to improve RAS

whether itrsquos from new regulatory requirements that impact their ERP systems growing SLA demands the need for new security features to protect valuable business data or a host of other sources The famous ldquofive ninesrdquo of availability ndash 99999 ndash is critical to the success of the business to avoid loss of customers and revenue

For a long time many companies have relied on UNIX platforms for the high RAS that their applications demand and theyrsquove been understandably reluctant to switch to newer infrastructure

But you can move to industry-standard x86 servers without compromising the levels of reliability and availability you have in your proprietary environment Todayrsquos x86-based solutions offer comparable demonstrated capabilities while reducing long term TCO and overall system OPEX The x86 architecture is now dominant in the mission-critical business applications space See the modernization success story below to learn how IT provider RI-Solution made the move

2 Consolidate workloads and simplify a complex business processing landscape Over time the business has

acquired multiple islands of database solutions that are now hosted on underutilized platforms You can improve efficiency and simplify management by consolidating onto one scale-up server Reducing Oracle or SAP licensing costs is another potential benefit of consolidation IDC research showed SAP customers migrating to scale-up environments experienced up to 18 software licensing cost reduction and up to 55 reduction of IT infrastructure costs

3 Access new functionality A refresh can enable you to benefit from newer technologies like virtualization

and cloud as well as new storage options such as all-flash arrays If yoursquore an SAP shop yoursquore probably looking down the road to the end of support for R3 and SAP Business Suite deployments in 2025 which will require a migration to SAP S4HANA Designed to leverage in-memory database processing SAP S4HANA offers some impressive benefits including a much smaller data footprint better throughput and added flexibility

50

Diana Cortesis a Product Marketing Manager for Integrity Superdome X Servers In this role she is responsible for the outbound marketing strategy and execution for this product family Prior to her work with Superdome X Diana held a variety of marketing planning finance and business development positions within HP across the globe She has a background on mission-critical solutions and is interested in how these solutions impact the business Cortes holds a Bachelor of Science

in industrial engineering from Universidad de Los Andes in Colombia and a Master of Business Administrationfrom Georgetown University She is currently based in Stockholm Sweden dianacorteshpcom

A Modernization Success Story RI-Solution Data GmbH is an IT provider to BayWa AG a global services group in the agriculture energy and construction sectors BayWarsquos SAP retail system is one of the worldrsquos largest with more than 6000 concurrent users RI-Solution moved from HPE Superdome 2 Servers running at full capacity to Superdome X servers running Linux on the x86 architecture The goals were to accelerate performance reduce TCO by standardizing on HPE and improve real-time analysis

With the new servers RI-Solution expects to reduce SAP costs by 60 percent and achieve 100 percent performance improvement and has already increased application response times by up to 33 percent The port of the SAP retail application went live with no expected downtime and has remained highly reliable since the migration Andreas Stibi Head of IT of RI-Solution says ldquoWe are running our mission-critical SAP retail system on DB2 along with a proof-of-concept of SAP HANA on the same server Superdome X support for hard partitions enables us to deploy both environments in the same server enclosure That flexibility was a compelling benefit that led us to select the Superdome X for our mission- critical SAP applicationsrdquo Watch this short video or read the full RI-Solution case study here

Whatever path you choose HPE can help you migrate successfully Learn more about the Best Practices of Modernizing your SAP business processing applications

Looking forward to seeing you

51

52

Congratulations to this Yearrsquos Future Leaders in Technology Recipients

T he Connect Future Leaders in Technology (FLIT) is a non-profit organization dedicated to fostering and supporting the next generation of IT leaders Established in 2010 Connect FLIT is a separateUS 501 (c)(3) corporation and all donations go directly to scholarship awards

Applications are accepted from around the world, and winners are chosen by a committee of educators based on criteria established by the FLIT board of directors, including GPA, standardized test scores, letters of recommendation, and a compelling essay.

Now in its fifth year, we are pleased to announce the recipients of the 2015 awards.

Ann Gould is excited to study Software Engineering at Iowa State University in the Fall of 2016. In addition to being a part of the honor roll at her high school, her interest in computer science classes has evolved into a passion for programming. She learned the value of leadership when she was a participant in the Des Moines Partnership's Youth Leadership Initiative and continued mentoring for the program. She combined her love of leadership and computer science by becoming the president of Hyperstream, the computer science club at her high school. Ann embraces the spirit of service and has logged over 200 hours of community service. One of Ann's favorite activities in high school was being a part of the archery club, and she is looking forward to becoming involved with Women in Science and Engineering (WiSE) next year at Iowa State.

Ann Gould

Erwin Karincic currently attends Chesterfield Career and Technical Center and James River High School in Midlothian, Virginia. While in high school, he completed a full-time paid internship at the Fortune 500 company Genworth Financial, sponsored by RichTech. Erwin placed 5th in the Cisco NetRiders IT Essentials Competition in North America. He has obtained his Cisco Certified Network Associate, CompTIA A+, Palo Alto Accredited Configuration Engineer, and many other certifications. Erwin has a 4.7 GPA and plans to attend Virginia Commonwealth University in the fall of 2016.

Erwin Karincic

You wouldn't jump out of an airplane unless you knew your parachute worked – would you?

No, of course you wouldn't. But that's effectively what many companies do when they rely on active/passive or tape-based business continuity solutions. Many companies never complete a practice failover exercise because these solutions are difficult to test. They later find out the hard way that their recovery plan doesn't work when they really need it.

HPE Shadowbase data replication software supports advanced business continuity architectures that overcome the uncertainties of active/passive or tape-based solutions. You wouldn't jump out of an airplane without a working parachute, so don't rely on inadequate recovery solutions to maintain critical IT services when the time comes.

With HPE Shadowbase software, you'll know your parachute will open – every time.

Find out how HPE Shadowbase can help you be ready for anything. Visit www.shadowbasesoftware.com and www.hp.com/go/nonstopcontinuity

Business Partner

©2015 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.
