
G00249318

Top 10 Technology Trends Impacting Information Infrastructure, 2013

Published: 19 February 2013

Analyst(s): Regina Casonato, Mark A. Beyer, Merv Adrian, Ted Friedman, Debra Logan, Frank Buytendijk, Massimo Pezzini, Roxane Edjlali, Andrew White, Douglas Laney

Information management technologies and disciplines continue to challenge traditional assumptions. CIOs, information leaders and architects need new skills to implement a modern information infrastructure and seize the opportunities in the second half of the information age.

Key Findings

■ Information is one of the Nexus of Forces changing business. In the Nexus, information is the context for delivering enhanced social and mobile experiences, and for connecting operational technology (OT) with IT.

■ Many new information management (IM) technologies require new skills and dedicated data scientists. These skills are rare, and organic growth is required. Some critical roles will remain unfilled due to a scarcity of talent.

■ In addition to the people, role and process aspects of enterprise information management (EIM), central to success is an enabling technology infrastructure that helps information producers and consumers organize, share and exchange any type of data and content, anytime, anywhere. Gartner calls this enabling technology infrastructure a modern information infrastructure.

■ Enterprises that accelerate adoption of these top technology trends will be able to cope better with vastly increased volumes of information, as well as the increased velocity, variety and proliferation of use cases for information.

Recommendations

■ Hire or train more individuals with IM and organizational skills who can work cross-company. Create training programs for business-based information managers and data stewards.

■ Use Gartner's Information Capabilities Framework to assess your organization's maturity and gaps, and to establish a vision and road map for improving your information infrastructure (see Figure 1).


■ Consider whether the main emerging trends in technology have been factored into your strategic planning process, but recognize that some of these top technologies could fail.

Table of Contents

Analysis
  Describe Information
    Big Data
  Integrate Information
    Modern Information Infrastructure
  Share Information
    Semantic Technologies
    The Logical Data Warehouse
  Integrate/Store Information
    NoSQL DBMSs
    In-Memory Computing
  Govern Information
    Chief Data Officer and Other Information-Centric Roles
    Information Stewardship Applications
    Information Valuation/Infonomics
Recommended Reading

List of Tables
  Table 1. Top Nine Technology Trends Likely to Impact Information Management in 2013

List of Figures
  Figure 1. Gartner's Information Capabilities Framework
  Figure 2. Taxonomy of In-Memory Technologies

Analysis

This document was revised on 22 February 2013. The document you are viewing is the corrected version. For more information, see the Corrections page on gartner.com.


Information is one of the four powerful forces changing the way business is done. In the Nexus of Forces, information is the context for delivering enhanced social and mobile experiences, and for connecting the OT and IT worlds. Mobile devices are a platform for effective social networking and new ways of working; social links people to their work and each other, in new and unexpected ways; and cloud enables the delivery of information and functionality to users and systems. Together, these forces intertwine to create a user-driven ecosystem of modern computing.

Significant innovation continues in the field of IM technologies and practices. The key factors driving this innovation are the explosion in the volume, velocity and variety of information, and the huge amount of value — and potential liability — locked inside all this ungoverned and underused information.

However, the growth in information volume, velocity, variety and complexity, and new information use cases, makes IM infinitely more difficult than it has been. In addition to the new internal and external sources of information, practically all information assets must be available for delivery through varied, multiple, concurrent and, in a growing number of instances, real-time channels and mobile devices. All this demands the ability to share and reuse information for multiple context delivery and use cases. More importantly, it demands new skills and roles.

For 2013, we have identified nine technology trends that will play different roles in modernizing IM (see Table 1). Some help structure information; some help organize and mine information; some integrate and share information across applications and repositories; some store information; and some govern information. These 2013 top technology trends make the role of information governance increasingly important.

This document updates Gartner's analysis of the top technology trends in IM. These technologies are strategic, because they directly address some of the broad challenges faced by enterprises (see "Information Management in the 21st Century").

Table 1. Top Nine Technology Trends Likely to Impact Information Management in 2013

Purpose                       Technology Trend
Describe information          Big data
Integrate information         Modern information infrastructure
Share information             Semantic technologies
                              The logical data warehouse
Integrate/store information   NoSQL DBMSs
                              In-memory computing
Govern information            Chief data officer and other information-centric roles
                              Information stewardship applications
                              Information valuation/infonomics

Source: Gartner (February 2013)


This document should be read by CIOs, information architects, information managers, enterprise architects and anyone working in a team evaluating or implementing:

■ An enterprise content management system

■ A business intelligence (BI)/analytic application

■ A master data management (MDM) technology

■ A document-centric collaboration initiative

■ An e-discovery effort

■ An information access project

■ A data warehouse (DW) system

This document discusses the main factors that Gartner expects to have the biggest impact in the next 12 to 18 months, and gives recommendations for responding to these factors and improving IT strategies.

Describe Information

Big Data

Analysis by Merv Adrian and Mark Beyer

Gartner estimates that big data will generate $232 billion in revenue cumulatively from 2011 to 2016 (see "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending Through 2016"). Gartner defines big data as "high-volume, -velocity and -variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making."

Big data warrants innovative processing solutions for a variety of new and existing data, to provide real business benefits. But processing large volumes or wide varieties of data remains merely a technological solution unless it is tied to business goals and objectives. The only reason for pursuing big data is to deliver against a business objective. For example, knowing how a large population responds to an event is useless unless a business can benefit from influencing that event or its outcomes. New forms of processing are not necessarily required, nor are new forms of processing always the least expensive solution (least expensive and cost-effective are two different things). The technical ability to process more varieties of data in larger volumes is not the payoff.

The most important aspects of big data are the benefits that can be realized by an organization. Increasingly diverse datasets complement each other and permit businesses to fill in missing gaps in the body of information. Filling these gaps improves operations and decisions, and enhances business delivery. There are new types of information asset that warrant processing innovations. These assets are not necessarily bigger, but the value of combining them — with each other and existing assets — requires bigger processing and management capabilities. As more assets are combined, the tempo of record creation, and the qualification of the data within use cases, become


more complex. Marketers promoting software, services and hardware frequently attempt to isolate one or two aspects of big data to create demand for their offerings. Big data information assets themselves are only one part of the issue. The other is the demand for innovative, cost-effective forms of processing — from capture, through storage, into access and all the way to analytics and, ultimately, data management — throughout the life cycle. Information has different quality, security and access requirements at different points of its life cycle. These differences create much of the complexity of big data.

It's no longer news that vendors are almost universally claiming that they have a big data strategy or solution. However, Gartner clients have made it clear that big data must include large volumes, processed in streams and batches (not MapReduce). They need both, as well as an extensible services framework that can deploy processing to the data, or bring data to the process, and which spans more than one variety of asset type (that is, not just tabular, stream or text). Gartner research indicates that in 2012, the cost ratio of delivering big data services compared to using software to manage and analyze big data was almost 15-to-1. Therefore, the focus of big data efforts is on solution delivery, not software purchase. Also, organizations addressing big data should insist on big data vendors providing the required functionality. Initially, vendors will discount such demands because they are rare, but they will become more commonplace and force product road map changes (see "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending Through 2016"). Products with established functionality, and possibly those already in beta testing status, should be accepted as only partial solutions. Organizations also need to focus on the specific features that lower the services to software/hardware cost ratio to less than 15-to-1, aiming for, at most, 10-to-1. For example, Gartner anticipates that by 2015 this ratio will approach 8-to-1, based on anticipated advances in software tools. Organizations should ask reference customers what their actual spend on internal staffing is, and what the costs for the necessary increase in staff, and consulting and implementation services charges from vendors, will be, compared to any software and hardware costs. The large number of inquiries Gartner receives about this topic is evidence of growing hype in the market. Also, aspects and types of big data have been around for more than a decade. It is only the recent market hype about legitimate new techniques and solutions that has created this increased demand.
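To make the ratio test concrete, here is a minimal back-of-the-envelope sketch in Python. The dollar figures are hypothetical illustrations, not Gartner data; only the 15-to-1 and 10-to-1 thresholds come from the research above.

```python
# Hypothetical check of the services-to-software/hardware cost ratio
# described above; the spend figures are illustrative placeholders.
def services_ratio(services_spend: float, software_hw_spend: float) -> float:
    """Return the ratio of services spend to software/hardware spend."""
    return services_spend / software_hw_spend

# Example: $3.0M in staffing/consulting against $250K of software/hardware.
ratio = services_ratio(3_000_000, 250_000)
print(f"{ratio:.0f}-to-1")  # 12-to-1: under the 15-to-1 ceiling, above the 10-to-1 target
```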

Enterprises should be aware of the many big data use cases. Addressing all the extreme aspects of 21st-century IM permits better analysis of all the available data, and the detection of even the smallest details in the body of information. This is a precursor to the effective use of Pattern-Based Strategy and the new type of applications this enables. In the case of complex event processing, queries are complex because of the many different feeds, and volume can be high or low. Also, velocity varies from high to low. Analytics can leverage technology, such as MapReduce, to access data in external Hadoop Distributed File System (HDFS) files, BI in-database, or as service calls managed by the DBMS.

Recommendations:

■ Find out where big data work is already going on in your organization and leverage it. Some industries, such as finance, telecommunications and government, have already implemented multiple big data projects.


■ Identify whether there are business use cases for existing business processes, or ones that enable new business processes. Determine new information asset types that support the business cases, or "dark data" that can be accessed and analyzed.

■ Start to determine your architectural standards for addressing big data. Organizations that have not addressed the more traditional requirements of storage, processing and information architecture need to carefully compare the use of big data solutions with more traditional ones.

■ As you consider vendor-proffered big data solutions, request and review reference customers that have used them. Determine their professional staffing costs and compare them to the cost of new software/hardware. Secure only those solutions that obtain better than 15-to-1 ratios, and specifically seek those with better than 10-to-1 customer-reported ratios.

■ Apply the tests to volume, variety and velocity. If volume is increasing as described, or variety is expanding, or velocity is creating temporal challenges in data, then you have a big data opportunity. Be aware that any combination of two "Vs" forces increased complexity in the analysis and the temporal relationships, and creates data availability issues.

Integrate Information

Modern Information Infrastructure

Analysis by Ted Friedman and Mark Beyer

IM is a discipline that requires action in many different areas, most of which are not technology-specific (see "Gartner's Enterprise Information Management Framework Evolves to Meet Today's Business Demands"). In addition to these softer aspects of EIM, central to success is an enabling technology infrastructure that helps information producers and information consumers organize, share and exchange any type of data and content, anytime, anywhere. This enabling technology infrastructure is what Gartner calls a modern information infrastructure. The information infrastructure consists of the various IM-related technologies deployed in the enterprise, and is meant to support the provisioning of all kinds of data — from producers of that data, to the various consuming applications and processes. For most organizations, information infrastructure is not planned and managed in a proactive or cohesive way. As such, it often has overlaps, redundancies, gaps and discrete silos that support only limited use of data. For organizations to be successful in the 21st century, a different approach to information infrastructure is required, and IM presents an opportunity for introducing a new level of architectural discipline. However, it is important that business process management (BPM), application development and enterprise architecture disciplines can leverage this approach, which will have different categories of capabilities and may need to be developed separately.

Because it must support a wide range of information use cases and information types, it is essential that information infrastructure be viewed as strategic, so that a vision to develop it in a cohesive and aligned way over time is possible. Gartner's Information Capabilities Framework represents such a vision. This framework (see Figure 1) describes the structures that lie between the many different sources of information (such as transactional data and social information) and the use cases for that information (such as analytics and transactions). It includes common capabilities for


using information — expressed as a set of verbs (describe, organize, integrate, share, govern and implement) — and specialized capabilities, as well as the standards, semantics and metadata schemes that give information these capabilities (see "The Information Capabilities Framework: An Aligned Vision for Information Infrastructure"). Also, the Information Capabilities Framework is a model describing the range of technology capabilities that are necessary to deliver a solid information infrastructure, and the manner in which those capabilities are integrated and exposed to various information use cases and applications. Organizations that establish a road map for this type of cohesive, application-independent and information-source-independent set of IM technology capabilities are best placed to achieve long-term EIM goals.

Figure 1. Gartner's Information Capabilities Framework

[Figure: the framework arranges six common capabilities — describe, organize, integrate, share, govern and implement — over detailed verbs such as discover, model, profile, identify, value, abstract, quality assure, protect, manage life cycle, reconcile (semantics), categorize, relate, aggregate, resolve (entities), build, test, propagate, optimize, provision, standardize, publish, enrich, secure, retain, monitor, format/structure, persist, synchronize, ingest, administer, back up/restore, access, reformat, deliver, transform, operate and search. These sit alongside specialized capabilities, metadata management, and information semantic styles (dedicated, registry, consolidation, auditing, orchestration and external).]

Source: Gartner (February 2013)


However, most organizations are a long way from these ideals in terms of the state of their information infrastructure. Looking at activities in numerous IT functions from various industries, we have identified common opportunities where organizations can strengthen their capabilities, or begin to align and deploy the relevant technology in a more focused, consistent and shared way. In "Ten Ways to Strengthen Your Information Infrastructure," we describe 10 ways in which existing information infrastructures can be improved in line with the Information Capabilities Framework vision, and offer recommendations for how organizations can begin to act in these areas to develop a modern information infrastructure.

Recommendations:

■ Use Gartner's Information Capabilities Framework to assess your organization's maturity and gaps, and establish a vision and road map for increasing the effectiveness of your information infrastructure.

■ Look at the current and anticipated range of information use cases to identify which capabilities will be most important and the highest priority.

■ Focus on capabilities supporting the development of models that provide transparent meaning, authority and workflow for all critical information assets in the enterprise.

■ Position information infrastructure as a strategic component of your EIM initiatives, to secure the requisite support and resources.

Share Information

Semantic Technologies

Analysis by Debra Logan and Frank Buytendijk

Semantic technologies extract meaning from data, ranging from quantitative data and text, to video, voice and images. Semantic technologies also include systems that aid the organization and classification of electronic information, adding meaning to existing data. A third category of semantic technologies is machine-learning paradigms. Semantic technologies also include automated decision making, such as expert diagnostic systems and decision rules for data classification. Here, "meaning" describes the extent to which constructs that are "rich logic," and meaningful to people, can also be interpreted by computers. As a consequence, the extent to which a technology has semantic capabilities varies along a scale.

Programming languages are poor semantic technologies, because they have little human meaning unless a person is trained. Data model diagrams are also semantically poor — the logic in them doesn't have a rich set of expressions. Ontology-driven software, describing relationships between data elements, scores higher on the semantic scale. Computers can infer and use new insights through the discovery of relationships between data elements, in a way that is also meaningful and understandable to users. Similarly, computers can analyze and group documents based on examples given by humans. This is a form of machine learning that is increasingly used when large sets of documents must be reviewed, for example, in legal proceedings.
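To make the example-driven grouping idea concrete, here is a minimal sketch of predictive document coding, assuming scikit-learn is available. The tiny corpus, labels and category names are hypothetical placeholders, not data from the research.

```python
# A toy "learn from human-labeled examples, then group new documents"
# pipeline, in the spirit of predictive coding for document review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Documents a human reviewer has already labeled (hypothetical).
train_docs = [
    "invoice payment due for consulting services",
    "quarterly revenue forecast and budget figures",
    "employee vacation policy and leave accrual",
    "benefits enrollment deadline for all staff",
]
train_labels = ["finance", "finance", "hr", "hr"]

# Turn text into weighted term vectors, then fit a simple classifier.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)
model = MultinomialNB().fit(X_train, train_labels)

# Group an unreviewed document the way the reviewer's examples suggest.
new_doc = ["updated leave policy for new employees"]
print(model.predict(vectorizer.transform(new_doc))[0])  # expected: hr
```

In a real legal-review setting the labeled set would be far larger, and the model's confidence scores would route borderline documents back to human reviewers.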


Semantic analysis ranges from the simple — such as grouping things into categories — to the complex — building highly specific models to represent the "real world" and the relationships of groups of things in the real world, for example, diagnostic expert systems. Semantic technologies include:

■ Ontologies

■ Taxonomies

■ Entity resolution

■ Classification

■ Categorization and tagging

■ Content analytics and clustering

■ Content monitoring and filtering

■ Predictive document coding

■ Expert systems

■ Neural networks

■ Various kinds of decision support systems

Semantic processing often uses a metadata layer, through a level of abstraction, typically in the form of a metadata model. There are two main ways to build these semantic metadata models. The first, traditional way is by using a design process. After design and implementation, the user just sees the model, and the model is imposed on the data. (BI tools function in a similar, often labor-intensive, way.) The second is by using automated discovery techniques. Through content analysis (including data, text, audio and video), models describing apparent relationships are created, modified and maintained.

Many of these techniques have existed for years and are based on advanced statistics, data mining, machine learning and knowledge management. One reason they are garnering more interest is the renewed business requirement for monetizing information as a strategic asset. Even more pressing is the technical need. The increasing volume, variety and velocity of big data in IM and business operations require semantic technology that makes sense of data for humans, or automates decisions. Examples include:

■ MarkLogic claims its semantic services can provide a contextual, information-driven user experience on any type of device (for example, desktop, tablet or phone).

■ AlchemyAPI offers a text-mining platform that features semantic analysis capabilities in the natural-language processing (NLP) field.

■ Oracle-Collective Intellect uses latent semantic analysis and NLP to understand conversations and assign sentiments to them.


■ SAS's products include systematic linkages across text repositories using semantic relationships.

Semantic technologies are found in many types of enterprise software:

■ Analytics

■ IM

■ Databases

■ Business applications

■ BPM

■ Governance, risk and compliance

■ Security

They are increasingly being used for data classification and tagging. This is to clean up legacy information, because large enterprises are increasingly striving to gain control of their data storage costs, meet legal and regulatory requirements, or simply find what is useful and relevant among the vast amount of redundant, outdated and trivial data. Using semantic technologies for building taxonomies and ontologies is already becoming more accepted. We estimate that a still small (but rapidly growing) number of large organizations are doing this. Automated discovery techniques are used significantly less, but leading organizations have already started experimenting with them. Vendors, both the leading ones in enterprise software and many small best-of-breed parties, will start promoting semantic technology before 2015.

Recommendations:

Consider using semantic technologies when one or more of the following criteria apply:

■ You have massive volumes of data or a large number of documents that need categorization.

■ There are expensive and recurring data quality issues.

■ Content security issues present a risk to your company.

■ Common semantic models are widely adopted in your industry.

■ Or, conversely, there is competitive advantage in exploiting semantic technologies as part of your big data initiatives.

The Logical Data Warehouse

Analysis by Mark Beyer

DW architecture is undergoing an important evolution, compared with the relative stasis of the previous 25 years. Although the term "data warehouse" was coined around 1989, the architectural style predated the term (for example, at American Airlines, Frito-Lay and Coca-Cola).


At its core, a DW is a negotiated, consistent and logical model populated by using predefined transformation processes. Over the years, the various options — centralized enterprise DWs, federated marts, hub-and-spoke arrays of central warehouses with dependent marts, and virtual warehouses — have all emphasized certain aspects of the service expectations required for a DW. The common thread running through all the styles is that they were repository-oriented. This, however, is changing. The DW is evolving from competing repository concepts to include fully enabled data management and information processing platforms. These new warehouses force a complete rethink of how data is manipulated, and where in the architecture each type of processing that supports transformation and integration occurs. They also introduce a governance model that is only loosely coupled with data models and file structures, as opposed to the very tight, physical orientation used before.

This new type of warehouse — the logical DW — is an IM and access engine, which takes an architectural approach and de-emphasizes repositories in favor of new guidelines:

■ The logical DW follows a semantic directive to orchestrate the consolidation and sharing of information assets, as opposed to one that focuses exclusively on storing integrated datasets. The logical DW is highly dependent on the introduction of information semantic services.

■ The semantics are described by governance rules from data creation and use case business processes in a data management layer. This is instead of going through a negotiated, static transformation process located within individual tools or platforms.

■ Integration leverages steady-state data assets in repositories and services in a flexible, audited model, using the best available optimization and comprehension solutions.

To develop a logical DW, enterprises should start by introducing the logical DW concepts used for dynamic consolidation, integration and implementation (see "Does the 21st-Century 'Big Data' Warehouse Mean the End of the Enterprise Data Warehouse?").

The biggest changes in 2012 regarding logical DW practices and adoption were:

■ A better understanding of the separation of service-level expectations in the warehouse, which created clear SLAs for technology. Repositories are now used for pervasive, persistent and latency-tolerant analytics. Federation/virtualization techniques are used for fluctuating data and query models, and/or demands for current data, direct from operational sources. Also, distributed processing is becoming the preferred architecture for comprehensive analytics (see "Understanding the Logical Data Warehouse: The Emerging Practice").

■ Client implementation practices indicate that the old "80/20" rule of analytics data management (in which 80% of end-user needs can be met with 20% of the available enterprise data, which is delivered under a single logical data model) is giving way to a new rule. Gartner calls this the "80/10/5" rule. For 80% of user needs, the repository follows the long-standing 80/20 rule already highlighted. However, 10% of user needs generally appear in the federation/virtualization model, and 5% of end-user analytic data management needs are met through distributed processes. The remaining, or undeclared, 5% of analytic needs have not yet been


specified for a preferred technology deployment. Also, they are usually managed and used by data scientists conducting investigations.

■ Federation tools and DBMSs have begun to compete for the coordination role in determining the analytic applications' entry point to logical DWs. The DBMS provides federation capability through in-built capabilities, such as accessing external tables and foreign data types. Its capability of utilizing long-standing and highly successful performance optimization techniques, such as parallelization, allows it to serve as a stable basis for incorporating new processing models. At the same time, the ability to store functionality — as user-defined functions, stored procedures and via SQL extension frameworks — to add new processing instructions, allows DBMSs to continue extending their data management domains. Federation tools are challenging DBMSs for the information access and coordination role, by offering enhanced connectivity, multitiered caching, incremental caching and logical modeling capabilities that can be extended to unstructured data calls.

■ Logical DW adoption is well under way. In our recent Magic Quadrant customer survey for the DW DBMS market, leading DW implementers (representing 15% to 20% of DW practitioners in the market) reported that 49% of their warehouses have introduced Hadoop or NoSQL as preprocessor platforms for data integration (a logical DW characteristic), or will introduce this type of solution in the next 12 months. Similarly, 19% of leading implementers report that they are combining their existing relational warehouse with Hadoop or NoSQL solutions (a logical DW characteristic) to create blended warehouses. At the same time, only 11% of these implementers report that they will completely replace existing warehouses with Hadoop clusters or NoSQL solutions (a competing architecture for analytics data management). This demonstrates that the logical DW is becoming the preferred best practice in the marketplace, more so than full replacement, by a wide margin (see the forthcoming document "The Future of Data Management for Analytics Is the Logical Data Warehouse").

Note that with the logical DW approach, the differing styles of support — such as federation, data repositories, messaging and reduction — are not mutually exclusive. They are just manifestations of data delivery. The focus is on getting the data first, then figuring out the delivery approach that best achieves the required SLA with the querying application. The transformations occur in separate services (see "The Logical Data Warehouse Will Be a Key Scenario for Using Data Federation").
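To illustrate the federation idea — persisting latency-tolerant history in a repository while blending in current operational data at query time — here is a minimal sketch using only the Python standard library. The table, the fetch_current_sales() helper and all figures are hypothetical stand-ins, not part of any product.

```python
# A toy logical-DW-style federated query: repository history plus a
# virtualized call to a "live" operational source, blended per query.
import sqlite3

# Repository tier: persisted, integrated history (SQLite as a stand-in).
repo = sqlite3.connect(":memory:")
repo.execute("CREATE TABLE sales_history (region TEXT, total REAL)")
repo.executemany("INSERT INTO sales_history VALUES (?, ?)",
                 [("EMEA", 120.0), ("APAC", 95.0)])

def fetch_current_sales() -> dict:
    """Virtualization tier: stand-in for a live operational-source call."""
    return {"EMEA": 7.5, "APAC": 4.2}

# Federated view: blend persisted history with current data at query
# time, rather than waiting for a batch load into the repository.
current = fetch_current_sales()
for region, total in repo.execute("SELECT region, total FROM sales_history"):
    print(region, total + current.get(region, 0.0))
```

The point is the division of labor: the repository answers the latency-tolerant 80%, while the per-query blend covers the demand for current data that federation/virtualization serves under the 80/10/5 rule.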

Recommendations:

■ Start your evolution toward having a logical DW by identifying data assets that are not easily addressed by traditional data integration approaches and/or easily supported by a "single version of the truth." Consider all technology options for data access and do not focus solely on consolidated repositories. This is especially relevant for big data issues.

■ Identify pilot projects in which to use logical DW concepts, by focusing on highly volatile and highly interdependent business processes.

■ Use a logical DW to create a single, logically consistent information resource that is independent of any semantic layer specific to a particular analytic platform. The logical DW should manage reused semantics and reused data.


Integrate/Store Information

NoSQL DBMSs

Analysis by Merv Adrian

NoSQL DBMSs — key-value stores, document-style stores, and table-style and graph databases — are designed to support new transaction, interaction and observation use cases involving Web-scale, mobile, cloud and clustered environments. Most NoSQL offerings, unintended for typical transactional applications, do not provide atomicity, consistency, isolation and durability (ACID) properties. Interest in NoSQL within the programmer community has expanded — customer counts, use cases and download volumes are increasing, and packed conferences are springing up in multiple geographies. Adoption is increasing as commercial providers add increasing functionality, support, training and community building. Job listings and inquiries to Gartner also reflect this rise in interest.
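To ground the data-model contrast, here is a toy illustration of the document-style, key-value approach, assuming nothing beyond the Python standard library. It is a teaching sketch of the model, not any particular product, and the records are hypothetical.

```python
# A toy document store: a key-value layer holding JSON documents whose
# schema travels with each document rather than being enforced centrally.
import json

store = {}  # key-value layer: _id -> serialized document

def put(doc: dict) -> None:
    store[doc["_id"]] = json.dumps(doc)

def get(doc_id: str) -> dict:
    return json.loads(store[doc_id])

# Two records in the same "collection" with different shapes — flexibility
# a fixed relational schema would not allow. Note there is no ACID
# transaction wrapping these writes.
put({"_id": "u1", "name": "Ada", "follows": ["u2"]})
put({"_id": "u2", "name": "Lin", "location": {"city": "Oslo"}})

print(get("u1")["follows"])           # ['u2']
print(get("u2")["location"]["city"])  # Oslo
```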

NoSQL DBMS usage continues to be driven by programmers, not the typical database team. The limitations of batch-style MapReduce-on-HDFS are driving increased interest in NoSQL data stores from Hadoop distributors. For example, specialists like Cloudera and megavendors like IBM and EMC are reporting increased use of HBase.

Another trend is the continued growth of distributions to include other developing open-source projects, such as DataStax's addition of Apache Solr to its Cassandra-based Hadoop distribution. Big vendors are responding. Oracle's new Oracle NoSQL Database 11g, derived from Berkeley DB, is a core element of its Big Data Appliance; Amazon's DynamoDB became available in January 2012, representing a significant trade-up from its SimpleDB. Also, many NoSQL offerings are hosted in the cloud by Amazon and others. Microsoft and IBM remain on the sidelines, although IBM has made a tentative probe into the graph database market with triple and SPARQL support in DB2 v.10.

Increasing adoption and growing customer demands have opened up a significant gap between commercially supported NoSQL DBMSs and open-source projects that have only community support. The latter remain immature and are used by Web developers for applications that are not mainstream. Commercial products are using their added funding not only to build sales, support and marketing, but also to add enterprise-class features intended to widen adoption and win new business. For example, 10gen claims its MongoDB Management Service has over 5,000 users, and Couchbase's CouchSync targets integration between cloud and mobile devices.

The growth of the ecosystem — partners and supporting products — will have an impact on broadening adoption. Dell-Quest Software reports that its Toad for Cloud Databases — with support for HBase and Mongo, as well as Cassandra and Amazon SimpleDB — is gaining traction in shops that use its tools for mainstream DBMSs. Informatica 9.5, which came onto the market in May 2012, added support for Hadoop source data integration. Added support for leading NoSQL targets will not be far behind. Hosting players — not just Amazon and Rackspace, but specialists like MongoLab — offer a lower cost of entry and a place to experiment.


Continuing market penetration, and follow-on project growth from existing customers, is also expected in 2013. Nonetheless, despite offerings and ecosystems continuing to mature, there is a long way to go. Awareness is still limited and the leading players remain off the direct-sales playing field, slowing their penetration of corporate IT strategic plans. As a result, business impact was moderate in 2012, but is increasing in 2013 as more organizations investigate and experiment. Decisions about how to persist data for many new-wave applications are being made by a new generation of programmers. These programmers are willing to use special-purpose NoSQL data stores that make their work less complex and provide greater flexibility. Notably, programmers should not drive enterprise procurement, but their influence is being felt.

The emergence of in-memory DBMSs (IMDBMSs) has created the potential for capturing many of the same use cases. However, language choices and commercial considerations may make NoSQL an option, despite IMDBMS success. Organizational unwillingness to build on open source has been a main block to its success, but the rise of commercializers is beginning to shift perceptions. The availability of data integration tools that offer an understanding of data structure in relational DBMSs — to support data movement into other systems (such as DWs and other applications) — will be a key enabler. NoSQL vendors are pursuing integration with more data management processes, and the recent emergence of Apache HCatalog — a metadata standard for the Hadoop stack — will accelerate development in this area.

Recommendations:

■ For multi-user, complex applications requiring high-performance transactional capability, NoSQL DBMSs are not an option, due to their lack of ACID properties, and should not be considered.

■ For Web-scale applications requiring large data stores with high performance — especially when transactions are read-only or not complex, and can be supported by a non-ACID model — some of the more mature NoSQL DBMSs can be used. For transactions that do not require ACID properties and have complex, mixed data types, these can be very effective.

■ Commercial NoSQL DBMSs can be used for use cases that are well served by graph, document or key-value architectures. NoSQL DBMSs are well suited to applications expected to have frequent updates, where scale and new requirements will demand rapid iteration.

In-Memory Computing

Analysis by Massimo Pezzini and Roxane Edjlali

In-memory computing is an emerging paradigm, enabling user organizations to develop applications that run advanced queries on very large datasets, or perform complex transactions, at least one order of magnitude faster (and in a more scalable way) than when using conventional architectures. This is achieved by storing application data in-memory (that is, in the computer's main memory), rather than on electromagnetic disks, without compromising data availability, consistency and integrity.


In-memory computing opens unprecedented and partially unexplored opportunities for business innovation (for example, via real-time analysis of big data in motion) and cost reduction (for example, through database or mainframe off-loading). However, until recently, only the most deep-pocketed and technologically savvy organizations (in verticals like financial trading, telecommunications, the military and defense, online entertainment, and logistics) could afford the high costs and deal with the complexities of adopting an in-memory computing approach.

Currently, the force driving mainstream users to increasingly look at in-memory computing as an affordable paradigm is the dramatic and never-ending decline in DRAM and NAND flash memory prices. Added to this is the availability of commodity multicore 64-bit microprocessors, which can directly address extremely large main memories (theoretically up to one billion gigabytes) and concurrently parallel-process large datasets in a computer's main memory.

However, equally important for the widespread adoption of in-memory computing is the availability of application infrastructure software that makes it possible for application developers to take advantage of the hardware potential in a reasonably user-friendly way. Gartner has identified a variety of application infrastructure technologies that make available APIs, programming models, infrastructure services and tools to address this requirement. In some cases, these technologies map well-known paradigms (for example, DBMSs) onto an in-memory set of data structures. In other cases (for example, complex event processing platforms), they implement a totally new programming paradigm, enabled by in-memory computing's low-latency access to vast arrays of in-memory data. In some cases, these software technologies are still in the early stages of their life cycle, and are therefore immature and not well understood. But in other cases, they are already widely used (some products have thousands and even tens of thousands of users) and are rapidly achieving adoption by mainstream, risk-averse organizations.

Gartner has identified a number of different in-memory technologies, each used for different purposes and contributing in different ways to the overall infrastructure landscape (see Figure 2 and "Taxonomy, Definitions and Vendor Landscape for In-Memory Computing Technologies"). Database administrators and IT architects will need to explore which combination of in-memory technologies will be required to meet their business needs.


Figure 2. Taxonomy of In-Memory Technologies

[Figure: the taxonomy groups in-memory data management (IMDBMSs, in-memory data grids, and in-memory (low-latency) messaging) and in-memory application platforms (event processing platforms, in-memory analytics, and in-memory application servers), which together support in-memory-enabled applications, all running on a memory-"intensive" computing platform (DRAM, flash, SSD, multicore, InfiniBand, clusters, grid, cloud).]

IMDBMS = in-memory DBMS; SSD = solid-state drive

Source: Gartner (February 2013)

Business drivers for in-memory computing adoption include:

■ The incessant quest for faster performance and greater scalability that is required for 24/7 support, and global scale, in Web and mobile-enabled businesses.

■ The desire to analyze increasing volumes of data in real time, at a very fine-grained level of detail.

■ The necessity of obtaining real-time business insights that support operational decisions.

■ The appetite for unconstrained data visualization and exploration, delivered to a wide audience of business users.

Factors inhibiting in-memory computing usage include:

■ High software license costs and uncertain ROIs

■ Scarcity of in-memory computing development and IT operation skills

■ A lack of standards

■ Limited availability of best practices

■ The fragmented technology landscape


■ An overcrowded market

■ Skepticism about new paradigms

Mostly, these are not structural factors, but are rather related to the still-turbulent state of the in-memory computing industry, and vendors' desire to maximize revenue. Therefore, the drivers will eventually prevail over the inhibitors, which will lead in-memory computing to widespread adoption by user organizations of any size, in any industry and geography. Further favoring this trend is a growing use of in-memory computing technologies (for example, IMDBMSs or in-memory data grids) "inside" a variety of hardware and software products that are designed to improve their performance, scalability and availability. Such an "in-memory computing inside" approach is typically implemented in a way that minimally impacts the applications leveraging these products. Therefore, in many cases, users can experience the improved quality of service (QoS) benefits derived from in-memory computing, with minimal impact on their established applications and skills.

The in-memory computing application infrastructure technologies that are attracting the most user interest, or that are most widely deployed in real-life production, are:

■ IMDBMSs. This is a DBMS that stores the entire database structure in-memory, and accesses it without the use of input/output instructions. (It has all the necessary structure in-memory, and is not simply an in-memory disk-block cache.) IMDBMSs can also use flash memory or disk-based storage for persistence and logging. Products in the market are specialized for analytical or online transaction processing applications. However, some vendors are trying to create hybrid products that are equally capable of covering both scenarios.

■ In-memory data grids. These provide a distributed, reliable, scalable and consistent in-memory data store — the data grid — that is shareable across distributed applications. These concurrently perform transactional and analytical operations in the low-latency data grid, which minimizes access to high-latency, disk-based data storage. In-memory data grids maintain data-grid consistency, availability and durability by using replication and partitioning. Although they can't be classified as full application platforms, in-memory data grids can also host application code.

■ High-performance message infrastructures. These provide program-to-program communication with a high QoS, including assured delivery and security. They also leverage innovative design paradigms to support higher throughput (in messages per second), lower latency (in milliseconds for end-to-end delivery time) and more message producers (senders) and consumers (receivers) than traditional message-oriented middleware products.

■ In-memory analytics platforms. This is an alternative BI performance layer in which detailed data is loaded into memory, for the fast query and calculation of large volumes of data. This approach removes the need to manually build relational aggregates and generate precalculated cubes to ensure analytics run fast enough for users' needs. In-memory analytics platforms usually incorporate (or integrate with) an underlying in-memory data store.

■ Complex event processing. This is a kind of computing in which incoming data about events is distilled into more useful, higher-level complex event data that provides insight into what is happening. It is event-driven because the computation is triggered by the receipt of event data.


It is used for highly demanding, continuous-intelligence applications that enhance situation awareness and support real-time decisions. Also, it can be supported by specialized, in-memory-based complex event processing platforms (a minimal illustration follows this list).

■ In-memory application servers. These are innovative application platforms designed to support high-performance/high-scale enterprise- and global-class applications, by combining in-memory technologies, scale-out architectures and advanced, often event-based, programming models. In-memory application servers usually incorporate (or integrate with) an underlying in-memory data store to hold the database of record, provide distributed transactions, and implement high-availability and elastic scale-out architectures through sophisticated, in-memory-enabled clustering techniques.
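As flagged in the complex event processing entry above, here is a minimal sketch of the event-triggered, in-memory pattern behind CEP, using only the Python standard library. The sensor stream, window size and threshold are hypothetical illustrations, not taken from the research or any CEP product.

```python
# Toy CEP: keep a sliding window of recent events entirely in memory and
# raise a higher-level "complex event" when a pattern emerges.
from collections import deque

WINDOW_SIZE = 5
THRESHOLD = 80.0
window = deque(maxlen=WINDOW_SIZE)  # recent low-level events, in memory

def on_event(reading: float) -> None:
    """Computation is triggered by the receipt of each event."""
    window.append(reading)
    if len(window) == WINDOW_SIZE:
        avg = sum(window) / WINDOW_SIZE
        if avg > THRESHOLD:
            # Distill raw readings into an actionable complex event.
            print(f"ALERT: sustained high readings (avg={avg:.1f})")

# Simulated incoming stream of sensor readings.
for value in [70.0, 75.0, 82.0, 85.0, 88.0, 90.0, 93.0]:
    on_event(value)
```

A production CEP platform adds declarative pattern languages, partitioned in-memory state and millisecond latencies, but the trigger-on-arrival structure is the same.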

Recommendations:

■ Brainstorm with business leaders about projects that target business growth or planned transformational initiatives. Explore how in-memory computing can provide breakthrough support for these.

■ Define a holistic in-memory computing strategy for your company, addressing long-term strategic business benefits and short-term tactical goals. However, be prepared (in terms of skills and organizational setting) to support different in-memory computing technologies that address different business needs.

■ Partner with the business to understand what delivers the greatest business insight and competitive differentiation, in terms of the ability to analyze, and much more responsively explore, data via in-memory analytics and an in-memory DBMS.

■ Invest in data governance processes and tools to avoid a wild proliferation of in-memory data that could lead to data chaos.

Govern Information

Chief Data Officer and Other Information-Centric Roles

Analysis by Debra Logan and Ted Friedman

EIM requires dedicated roles and specific organizational structures. Specific roles — such as chief data officer, information manager, information architect and data steward — will be critical for meeting the goals of an EIM program. The fundamental objectives of the roles remain constant: to structure and manage information throughout its life cycle, and to better exploit it for risk reduction, efficiency and competitive advantage.

We are increasingly seeing information-focused roles develop within specific business functions. BI/analytics, data quality, data warehousing, big data initiatives, and a general desire to improve, correct and align reporting and metrics, are common reasons for creating such roles. In addition, many examples have been emerging in the area of legal and regulatory compliance, where the absolute business need to produce electronic information has forced IT and legal departments to work together to create cross-functional roles. Some leading organizations have been creating a new


chief data officer role aimed at maximizing the value and use of data across the enterprise, and at managing the associated risks. Currently, the chief data officer tends to be more aligned with business than IT, but has significant influence over both. A Gartner Research Circle panel survey of more than 300 organizations, conducted in July 2012, found that only 7% had a chief data officer. Our secondary research scans concur, and find that most existing chief data officers are in the U.S. However, over the next five years, enterprises with a high information element in their core value-creating activities will experience an increasing need for a single corporate leader of information policy and strategy. The need will be more acute in those industries where the effect of regulation compels action on information transparency for compliance or e-discovery, to defend against legal attacks. There are also emerging roles around exploiting the value of information, which may also have the title of CDO or chief digital officer.

Demand for data steward roles is growing. According to the survey from July 2012, Gartner clients expect recruitment for these roles to increase by 21% over the next two years. These roles are unlike those currently employed by business units (BUs), because they will need to combine domain expertise and technical knowledge with an understanding of information classification and organizational techniques. In most companies, these roles and job titles vary and are only just now coming into existence — most still retain business-oriented titles. The enterprises that move first to create these roles, and to train for them, will be the first to benefit from information exploitation. IT is changing — this has become the accepted wisdom. The questions that we are now answering are "how is IT changing?" and "what new roles will be required to prepare for the change?" Some of the roles that we have seen are:

■ Data stewards

■ Information managers

■ Digital archivists

■ Knowledge managers

■ Business analysts

■ Information/data scientists

■ Chief data officers

■ IT legal professionals

We have been tracking this activity for several years, and believe that the trend is about to become much more prevalent as the success of early adopters of such roles becomes more widely publicized. One of the most important issues here is that we are not talking about pure technology adoption, but about human beings — who must essentially cross-train from their area of domain expertise, adding a set of IM skills to what they already know. It's not going to happen quickly. The scarcity of information leader talent will require executive leaders to develop it as much as hire it. Also, because creating EIM-specific roles requires that people change their job responsibilities — or that personnel are actually hired to do these jobs — strong business cases will always be required.


Recommendations:

■ Create training programs for business-based information managers and data stewards (domain experts who need to learn new IM skills) that focus on the new IT functions.

■ Hire more individuals with IM and organizational skills (for example, information architects, data stewards and data scientists) who can serve as crosscompany resources. All enterprises should do this.

■ Finally, CIOs will need to advise BU leaders on how to task and manage their new IM professionals.

Information Stewardship Applications

Analysis by Andrew White and Ted Friedman

Governance of data is a people- and process-oriented discipline that forms a key part of any EIM program, including MDM. The decision rights and authority model that forms governance has to be enforced and operationalized. This means that technology is needed to help formalize the day-to-day stewardship processes of (business) data stewards and combine them into their normal work routines. This specific toolset needs to be closely targeted at the stewardship of primarily structured data. This packaged technology is slowly progressing. Initial vendors, including Collibra, Kalido, Global Data Excellence and SAP, have launched products in the last couple of years, and many more are formulating plans for an offering, or working on multiple offerings. The continued high growth of, and interest in, MDM programs is driving much of the interest in this technology, because MDM gives these solutions recent and specific context, which makes them applicable and meaningful to users. However, other initiatives, such as data quality improvement and broadening information governance goals, are also driving demand. Individual tools, like the data quality tools that output the data-level analytics monitored and enforced by these applications, have existed for years. They are mature and can help support a manual stewardship project. These new applications also promise to be more far-reaching, so that they can support a much broader and more automated stewardship process.
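
To make the notion of operationalized stewardship concrete, the following minimal sketch shows the basic pattern these applications package: a data quality rule is evaluated against master data records, and violations are routed to a steward's work queue. Everything here (the record layout, the rule and the queue) is a hypothetical illustration, not any vendor's actual API.

    # Hypothetical sketch: evaluate a completeness rule against master data
    # records and route violations to a steward's work queue.
    from dataclasses import dataclass, field

    @dataclass
    class Violation:
        record_id: str
        rule: str
        detail: str

    @dataclass
    class StewardQueue:
        """Stand-in for the data steward's day-to-day work queue."""
        items: list = field(default_factory=list)

        def assign(self, violation: Violation) -> None:
            # In a packaged product, this step would raise a stewardship task.
            self.items.append(violation)

    def check_completeness(record: dict, required: list) -> list:
        """Flag required attributes that are missing or empty."""
        return [
            Violation(record["id"], "completeness", f"missing '{attr}'")
            for attr in required
            if not record.get(attr)
        ]

    customers = [
        {"id": "C-001", "name": "Acme Corp", "country": "US", "tax_id": "12-345"},
        {"id": "C-002", "name": "Globex", "country": "", "tax_id": None},
    ]
    queue = StewardQueue()
    for rec in customers:
        for v in check_completeness(rec, ["name", "country", "tax_id"]):
            queue.assign(v)
    for item in queue.items:
        print(item)  # C-002 is flagged twice: missing 'country' and 'tax_id'

What a packaged information stewardship application adds around this loop is workflow, audit trails and policy management, which is what distinguishes it from a standalone data quality tool embedded in a manual process.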

With the new growth of, and interest in, big/social data, these tools will provide a basis for organizations to steward data from outside their organizations, which will become important as more and more big data challenges arise. The emergence of growing numbers of integrated tools for information stewardship will soon challenge established MDM implementation efforts, because each new MDM program brings native governance requirements that vary greatly. In the short term, organizations are getting by using today's MDM offerings, data quality tools, security/privacy and archiving tools, and adding manual processes. In about five to 10 years' time, mature IT organizations will formally integrate BPM tools with MDM tools to express information governance routines and to monitor business activity and corporate performance management. Some vendors offer part of this technology, but often not as an integrated environment for use by the master data steward, or for deployment across all other master data stores. Additionally, these and other applications will evolve to steward different data in support of other initiatives. However, not all these applications are evolving to steward other data in addition to master data.


The governance of information is a core component of any EIM discipline. MDM, a critical EIM program, cannot be sustained without an operational master data stewardship role and function. At worst, the lack of effective governance will lead to the failure of EIM initiatives; at best, it will result in undesirable outcomes, such as the business case for MDM not being realized. A successful stewardship routine will lead to sustainable and persistent benefits from programs like MDM, such as increased revenue, lower IT and business costs, reduced cycle times (for example, in new product introductions) and increased business agility.

Recommendations:

■ Recognize the general lack of maturity in technology offerings related to governing data across multiple hubs and application data stores. Today, most solutions are best suited to the single-hub level.

■ Many MDM solutions that focus on one or a few data domains have some governance capability. For the next two to three years, most MDM implementations will focus on tools that manually define and manage governance, with limited help from technology vendors across MDM systems or the enterprise. Work with your MDM technology providers to help them understand which MDM tools must be made operational. If you need to steward other data outside an MDM program, be cautious, because the lack of a unifying driver like MDM could lead to fewer vendor options.

Information Valuation/Infonomics

Analysis by Debra Logan, Doug Laney and Andrew White

Information valuation is the process by which relative value or risk is assigned to a given information asset or set of information assets. The term "information asset" encompasses any digital or physical object, or item of corporate data: any information artifact that can produce value for a company. One of the main characteristics of an asset is that, depending on the circumstances, it can also be viewed as a risk.

The question of the value of information has been around for a long time. The phrase "knowledge is power" sums it up well, because the value of keeping and sharing information has been recognized since the beginning of oral and written communication. However, only since the dawn of the information age, or more precisely the computer age, has the question of "information as an asset to be exploited on a massive scale" become an issue. Gartner clients have struggled with these questions for at least a decade.

A more formal approach to information valuation is beginning to take hold in leading-edge organizations. Issues such as how much to invest in IT systems, information security and cloud computing all depend on the underlying question of how much the information is worth to the business.

In "Introducing Infonomics: Valuing Information as a Corporate Asset," we introduce a more formalapproach to valuing information. When considering how to put information to work for your

Gartner, Inc. | G00249318 Page 21 of 25

This research note is restricted to the personal use of [email protected]

This research note is restricted to the personal use of [email protected]

organization, it is important to not only think about information being like an asset, but also toactually value and treat it as if it were an asset. Although information meets accounting standardscriteria for an intangible asset, it is not found on public companies' balance sheets. If you're notmeasuring the actual and potential value generated from information assets, then you're in a poorposition to close that gap. You will not be in a good position to reap any of the other potentialbenefits that come from quantifying the value of information either. Any number of establishedmethods for valuing intangibles (for example, market approach, cost approach or income approach)can be used, or organizations can select valuation methods that map to nonfinancial keyperformance indicators.
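
As a concrete illustration, the sketch below applies two of the established intangible-asset methods named above, the cost approach and the income approach, to a hypothetical customer data set. The figures and function names are invented for this example; it is a simplification for discussion, not a Gartner valuation model.

    # Illustrative only: two established intangible-asset valuation methods
    # applied to a hypothetical information asset. Figures are invented.

    def cost_approach(acquisition_cost: float, annual_upkeep: float,
                      years_held: int) -> float:
        """Value the asset at what it cost to acquire and maintain."""
        return acquisition_cost + annual_upkeep * years_held

    def income_approach(annual_cash_flows: list, discount_rate: float) -> float:
        """Value the asset as the present value of income attributable to it."""
        return sum(
            cf / (1 + discount_rate) ** (year + 1)
            for year, cf in enumerate(annual_cash_flows)
        )

    # A customer data set: $200k to build, $50k/year to maintain, held 3 years.
    print(f"Cost approach:   ${cost_approach(200_000, 50_000, 3):,.0f}")
    # The same data set is credited with $120k/year of incremental revenue
    # over the next three years, discounted at 10%.
    print(f"Income approach: ${income_approach([120_000] * 3, 0.10):,.0f}")

In practice, the two methods will typically disagree, and the appropriate choice depends on why the valuation is being performed; this is one reason a combination of business, IT and finance professionals is usually involved.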

Business leaders are often slow to understand that they are the ones who must establish the value of information and how it relates to their businesses. IM is seen as the IT department's responsibility, so the formal methods for information valuation will be slow to take hold in the marketplace, and remain mostly subjective.

Recommendations:

■ Understand the characteristics of information, which will help you value it (see "Introducing Infonomics: Valuing Information as a Corporate Asset"). Note that few people have experience of information valuation; a combination of business and IT professionals, along with others who have a financial background, may be needed to make these decisions.

■ Establish the value of information through a simple high-level categorization scheme (a minimal code sketch of this scheme follows these recommendations):

■ Nuisance information adds no value and may contribute to a glut of data. Its applicability is limited, and it may be deemed valuable by only one or a few people in the enterprise.

■ Compliance information, which enterprises are required to keep by law, is more like an insurance policy. Invest in it, just in case.

■ Operational information is related to faster, less costly and more effective operations. It can provide competitive advantage but, more often, is part of the cost of doing business. It is required for keeping up with competitors, not for outpacing them.

■ Growth-related information comes from R&D and is associated with innovation, process improvement or mission enhancement.

■ For the next level of analysis, use business-based frameworks. For each business function or role, and for each process, work through the types of document and data that are required to achieve business goals. Many individual projects exist that manage different types of information and, although IM disciplines share common goals of information consistency and usability, they lack options for discussing concepts and coordinating efforts. Enterprise information architecture guides such activities to ensure organizations gain business value from their enterprise information (see "Gartner Defines Enterprise Information Architecture").

■ Communicate the benefits of treating information as an asset to your business constituencies. Business benefits include:

■ Fewer errors due to data inconsistencies


■ Greater efficiency in sharing resources across the enterprise

■ An improved ability to compare and aggregate information, such as sales, balances, inventories and prices

■ A greater ability to relate different behaviors, activities and relationships, for example, 360-degree views of customers or citizens

■ Economies of scale
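
To show how the high-level categorization scheme above might be put to work, here is a minimal sketch that encodes the four categories and attaches a first-pass handling policy to each. The categories come from the scheme itself; the asset names and policies are invented for illustration.

    # Minimal encoding of the high-level categorization scheme described
    # above. The handling notes attached to each category are illustrative.
    from enum import Enum

    class InfoCategory(Enum):
        NUISANCE = "adds no value; candidate for defensible disposal"
        COMPLIANCE = "kept by legal mandate; treat like an insurance policy"
        OPERATIONAL = "cost of doing business; keeps pace with competitors"
        GROWTH = "tied to R&D and innovation; differentiating"

    # Hypothetical first-pass triage of asset types into categories.
    triage = {
        "duplicate marketing lists": InfoCategory.NUISANCE,
        "tax filings": InfoCategory.COMPLIANCE,
        "order history": InfoCategory.OPERATIONAL,
        "product telemetry": InfoCategory.GROWTH,
    }

    for asset, category in triage.items():
        print(f"{asset:28} -> {category.name}: {category.value}")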

Recommended Reading

Some documents may not be available as part of your current Gartner subscription.

"Information Management in the 21st Century"

"Information Management in the 21st Century Is About All Kinds of Semantics"

"The Eight Common Sources of Metadata"

"Does the 21st-Century 'Big Data' Warehouse Mean the End of the Enterprise Data Warehouse?"

"The Importance of 'Big Data': A Definition"

"The Multidiscipline Aspect of the Information Capabilities Framework"

"Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending Through 2016"

"Defining the Scope of Metadata Management for the Information Capabilities Framework"

"The Logical Data Warehouse Will Be a Key Scenario for Using Data Federation"

"Value From Information; CIO Desk Reference Chapter 15, Updated Q3 2012"

"Introducing Infonomics: Valuing Information as a Corporate Asset"


Acronym Key and Glossary Terms

BI business intelligence

BU business unit

BPM business process management

DW data warehouse

EIM enterprise information management

HDFS Hadoop Distributed File System

IM information management

IMDBMS in-memory DBMS

MDM master data management

OT operational technology

QoS quality of service

Evidence

1. This document is based on Gartner inquiry trends in BI/analytics and IM/big data.


GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2013 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity" on its website, http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp.
