IBM Tivoli Performance Analyzer – Free at Last
Simon Barnes

In this issue:
Using the Tivoli Toolset to Manage VMware
The benefits of a global scheduling solution
Free Technical Account Managers
Self-Service Enterprise Monitoring
Netuitive Self-Learning Technology
User Defined ITNM Stitchers
Orb Flex - The future for Contract Tivoli Specialists
ForestSafe – Password Provisioning and Remote Access
News in a Minute
Netcool Impact – Escalating Events based on Business Criticality
It's not often you get something for free; however, with the advent of IBM Tivoli Monitoring 6.2.2 (ITM), IBM Tivoli Performance Analyzer (ITPA) has been bundled into the base ITM product. This means that at no additional cost it is now possible to offer your users some simple capacity planning and add a predictive capability to your existing IBM Tivoli Monitoring deployment.
To help you take advantage of this we
have devised a 10-day service offering to
upgrade your existing ITM deployment,
create a suitable data warehouse
configuration and then implement the
additional features of IBM Tivoli
Performance Analyzer. In addition we will
deploy another free product – Tivoli
Common Reporting – to allow you to
report on the new data that will result
from the addition of ITPA.
Why add Tivoli Performance Analyzer into your Deployment?
You might be asking what benefits you would get from adding ITPA into your ITM deployment. We think there are
many reasons but the main four are
listed below:
1) Forecast Resource Trends
ITPA uses the Least Squares Regression
method to calculate the trended value of
a monitored attribute. ITPA calculates
the approximate value of a monitored
attribute for a given forecast period (by
default 7, 30, or 90 days but these can be
supplemented or modified as you wish)
and displays the future values in custom
workspaces. The trends will also be
shown as overlays on historical
performance graphs.
2) Alerts based on predictions
Using Performance Analyzer you can
create situations (or alerts) that will warn
of an impending problem in a defined
period of time. For example, the situation Disk_TimeToCriticalThreshold_1W predicts whether a monitored disk will reach the defined critical limit within the next seven days. This alert will only be issued if the prediction strength is set to 3. Strength is a derived discrete value enabling operators to quickly evaluate the strength of the trend predicted by the algorithm, ensuring that only alerts derived from a sufficient number of samples are sent.
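As a rough illustration of the mechanics (a minimal sketch, not ITPA's actual implementation; the function names and sample data are hypothetical), a least-squares trend fit and a time-to-threshold check can be expressed as follows:

# Minimal sketch of a least-squares trend fit and time-to-threshold
# estimate of the kind ITPA performs. Names and data are hypothetical.
def fit_trend(samples):
    # Ordinary least-squares fit of y = slope * t + intercept
    n = len(samples)
    mean_t = (n - 1) / 2
    mean_y = sum(samples) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(samples))
    var = sum((t - mean_t) ** 2 for t in range(n))
    slope = cov / var
    return slope, mean_y - slope * mean_t

def days_to_threshold(samples, critical):
    # Days until the fitted trend reaches the critical limit
    # (None if the trend is flat or falling)
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None
    last_day = len(samples) - 1
    return (critical - (slope * last_day + intercept)) / slope

usage = [70, 71, 73, 74, 76, 78, 79, 81]   # one disk-usage sample (%) per day
eta = days_to_threshold(usage, critical=90)
if eta is not None and eta <= 7:
    print("disk predicted to reach the critical limit in %.1f days" % eta)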
Example of Predictions and Trending Chart
3) Derive Metrics from existing attributes
Also available in ITPA are derived arithmetic values. The calculations use
expressions and result in a single, more
meaningful, new attribute. The
expression may include any arithmetic
function. An example of an arithmetic
expression is the workload parameter
used in Performance Analyzer. This is
derived from the following function:
Workload = ((CPUUtilization * 6) / 10) + ((CommittedMemory * 40) / CommittedLimit)
The calculated arithmetic result is stored
in a single attribute, shown in the
workspace and then reported along with
other useful derived parameters such as
memory differential and CPU
consumption (the CPU that a system is
attempting to actually use) to show a
concise view of a system's performance.
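Using the Workload expression quoted above, a minimal sketch of the calculation (the sample input values are invented purely for illustration) looks like this:

# Sketch of the derived Workload attribute using the expression quoted above;
# the input values are hypothetical.
def workload(cpu_utilization, committed_memory, committed_limit):
    return ((cpu_utilization * 6) / 10) + ((committed_memory * 40) / committed_limit)

# e.g. 65% CPU, 6 GB of memory committed against an 8 GB commit limit:
print(workload(65, 6, 8))   # (65*6)/10 + (6*40)/8 = 39 + 30 = 69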
4) Out of the Box Reporting
The 4th benefit you will gain from
implementing ITPA is the reporting. IBM
has instigated an initiative across the
Tivoli portfolio called Tivoli Common
Reporting (TCR). Its aim is to provide
customers with a reporting solution
common to all Tivoli products that can be
used by IBM to deliver "canned" product-
oriented reports, whilst at the same time
providing customers with the tools to
author and publish business-specific reports from data outside of the IBM products.

Sample Report
Pre-canned reports are available to view
storage, performance and future capacity
data from all servers and applications.
For example, the report above shows Physical Data Storage over 20 days and predicts how long, at the current trend, it will be before the storage reaches a critical capacity threshold.
Like to know more?
If you would like to know more about Orb Data's 10-day Tivoli Performance Analyzer service offering – in which we will upgrade your ITM infrastructure, implement Tivoli Performance Analyzer and set up Tivoli Common Reporting to use the ITPA base reports for a single HUB TEMS environment – then call me on +44 (0) 1628 550450 or send an email to the address below.
About the author: Simon Barnes is a Director and co-founder of Orb Data. Simon is well respected in the Tivoli community for his technical capabilities, having worked with Tivoli products for in excess of 12 years. Email: [email protected]
If you are thinking of migrating to OMNIbus why not take advantage of our fixed price review?

The review consists of 3 days of on-site consultancy and 2 days off-site.

During the time onsite our expert will work closely with your team to gather the following information:
• Architectural changes between the IBM Tivoli Enterprise Console and Netcool OMNIbus
- What new hardware will be needed?
- Is Hot-Standby required?
• Visual Changes
- Will the OMNIbus consoles match what is currently used?
- Are there enhancements that can be achieved?
• Current event-sources and changes needed
- What are the current event-sources, and how can these methods be migrated to the new architecture?
- If State Correlation is used, this will be analysed to see whether it can be replaced with OMNIbus probe rules
• Rule migration changes
- The rulebase will be analysed and documented
- From this data, a table will be created showing:
• Each rule and how it can be mapped against standard OMNIbus functionality
• Rules that are no longer needed
• Rules that may need Netcool Impact to function as before
• Customisation and Reporting
- Are there existing customisations and reports that will need to be emulated?

When the data is gathered, the consultant will then produce a report documenting findings and recommendations. We will also document the length of time a migration will take and offer a technical project plan detailing how this can be achieved in the timescales we have given.

The documentation phase will take an extra 2 days, off-site, resulting in a 5 day total engagement. The total cost for this review is £3500 or €4000.

To discuss requirements further, and register your interest in this offering, please email [email protected] or phone +44 (0)1628 550450
Using the Tivoli Toolset to Manage VMware
Simon Barnes

Consolidation brings new challenges alongside the benefits
When times get tough it is not surprising
that companies identify potential cost
savings in their IT budget, and nowhere is
this more evident than in the rush to
virtualisation. Gartner estimate that the
total number of virtual machines deployed
worldwide is expected to increase from
540,000 at the end of 2006 to more than 4
million by 2009, and according to a recent
survey, the top three motivators for
customers looking to virtualise related to
cost reduction.
However, early adopters of the
virtualisation technology are beginning to
realise that there are new challenges in
managing virtualised environments that
simply were not there before. For
example, consider an application running
on a virtual server that experiences a
performance problem. Whereas before
the problem might have been fairly simple
to trace with conventional monitoring
tools, now the problem might exist in the
application, the guest operating system,
the virtual resources, the hypervisor or in
the physical resources such as CPU,
memory, disk or network. The problem is
not a “virtual performance problem”, it is
a very real performance problem
manifested in a complex, consolidated
virtual environment.
Consolidating Resources in the Data Centre
Whether you are some way down the path
to virtualisation, or are just about to start,
you will need to answer some
fundamental questions:
1. What are the critical components of my services?
2. What services are the most "mission critical" to my company?
3. How is each resource contracting or growing?
4. What resources are "under pressure" today?
5. Once consolidated, how do I avoid unexpected capacity requirements and keep optimising?
Only when all these questions are
answered can you be sure that
implementing a virtualised environment
will bring the cost savings required, and
not be an unexpected burden. As Gartner
rightly point out “Virtualization without
good management is more dangerous
than not using virtualization in the
first place.”
Discovering the Data Centre
The first stage of the cycle is to identify
the critical virtual components of the
service, and be able to view the resources
of both a physical and a virtualised
environment in a single pane alongside
their configurations, their relationships
and changes that have occurred over time.
IBM Tivoli Application Dependency Discovery Manager (TADDM) provides exactly this visibility. It discovers the interdependencies between applications, computer systems and networking devices, using agent-less, credential-free discovery, and automated application maps.
In doing so it allows the IT team to
understand what they have and how the
business services relate to the physical
and virtual infrastructure.
Visualise the Consolidated Environment
The second stage is to provide an
Enterprise Portal to visualise information
provided by the system monitors in
virtualised environments. This streamlines the identification of a problem's "root cause" and reduces the mean time to recovery.
IBM Tivoli Monitoring (ITM) for Virtual
Servers helps prioritise decisions by
visualising the actual physical and virtual
server utilisations alongside each other. In addition, with its built-in warehouse, it allows real-time and historical data to be used to help separate intermittent problems from recurring problems and peak workloads.

ITM for Virtual Servers Sample Screen

VMware TADDM Screen

ITM for Virtual Servers offers two agents specifically for VMware: the VMware ESX Agent (which uses SSH and SNMP to communicate directly with an ESX server), and the VMware VI Agent (which connects via SSL to the VMware Virtual Centre). Although the ESX agent works perfectly well, I would suggest that if the Virtual Centre is available, then the VI Agent is the one to use. It has a richer set of data available and is simpler to install and maintain.

ITM for Virtual Servers will automatically link the virtual machines detected on the ESX server to any agents detected locally on the systems. This means that from any ESX server discovered, you can drill down to utilise ITM's detailed operating system and application statistics not available through the ESX server itself.

Linking from the ESX server to detailed local data

Forecast Growth in the Consolidated Environment
Matching IT supply to demand can be a real challenge. A discrepancy between the capacities needed and demanded results in either under-utilised or over-utilised resources. Under-utilisation of server resources raises costs due to wasted resources and payment for resources that are not needed, whereas over-utilisation of server resources can result in slow or erratic response times, lost productivity and ultimately application failures and dissatisfied customers. The aim is to get the balance right.

IBM Tivoli Performance Analyzer (ITPA) provides the ability to match server capacity against changing demand and, combined with IBM's Tivoli Monitoring Data Warehouse, helps you identify trends, predict system behaviour and make informed management decisions to guide future growth.

Virtual Servers Performance Analyzer Screen

Out of the Box Reporting
IBM has instigated an initiative across the entire Tivoli portfolio called Tivoli Common Reporting (TCR). Its aim is to provide customers with a reporting solution common to all Tivoli products that can be used by IBM to deliver "canned" product-oriented reports, whilst at the same time providing customers with the tools to author and publish business-specific reports from data outside of the IBM products.

The reports are written using a simple Eclipse-based open source reporting tool called BIRT (Business Intelligence and Reporting Tools). Complementing the BIRT component is a data store, which is used for storing report designs and supporting resources (images, scripts etc). A web-based interface is used both to manage the items residing in the data store and to run and view reports.

Each report published into TCR can be viewed as either HTML or Adobe PDF files. In the screenshot below, you can see an example of a standard ITM 6.2 report to view disk utilisation for a single system displayed as HTML. The report shows patterns of CPU and/or memory utilisation over a period of time for the selected server to identify patterns of utilisation (peak vs. non-peak). It appears in this example that Saturday mornings might need further analysis.

TCR Out-of-the-box VMware Report

Capturing Best Practices with Automation
Since the inception of computers they have been recognised as being good at performing repetitive tasks faster and more accurately than by hand. These automations really fall into two categories:
• Systems Management Automation
• Provisioning Automation
Automation of Systems Management
processes allows for either manual or
automated processes to be executed in
response to an individual situation. For
example, when an action is executed by ITM in response to a problem, it can provide immediate return on investment and faster time to value.
However, perhaps more important in the
context of VMware is Provisioning
Automation. Depending on the
architecture, an application may either
scale vertically (scale up) or scale
horizontally (scale out). Both
environments must be supported to truly
optimise a virtualised environment.
Vertically scaling applications scale easily
with the addition of more “horsepower”,
such as CPU and memory. An example of a
scale up environment is a database or an
application written to exploit multiple CPU
environments. Today, many environments
provide for the quick and easy addition of CPU, including, in a VMware context, VMotion.
Horizontally scaling applications scale
with the addition of another instance of
the server, or a server in a cluster, and are
extremely labour-intensive and prone to
error. Examples of scale out environments
are distributed sites, WebSphere
Clustering and Microsoft Clustering.
IBM Tivoli Provisioning Manager (TPM)
provides an end-to-end provisioning
capability with pre-supplied workflows that will provision the virtualised resources in both vertically and horizontally scaling applications. For example, TPM can use a template to create the virtual machines with the appropriate networking, storage and security, and support existing procedures and scripts to install virtual operating environments, middleware and application software. In the example below, TPM uses a virtual server template as a base to build a new virtual machine. As of version 7.1, TPM also has the ability to build not only the VM clients but also the physical VMware host (ESX) server itself.
TPMfOSD (TPM for OS Deployment) 7.1 now allows you to deploy an
unattended install image of ESX 3.5. If you
run a large ESX server farm and regularly
need to build new ESX hosts, this may be
an attractive option.
Assigning Ownership to Ensure Accountability
The last part of the cycle is to implement
resource usage accountability. This phase ensures that the originally perceived advantages of virtualisation are achieved: IT becomes more cost effective in terms of hardware, software, energy, staff and floor space, and makes better use of its existing resources.
Tivoli Usage and Accounting Manager

Virtual Machine Creation Using Workflows

To achieve this you will need to know who is consuming which IT resources, both physical and virtual, what the cost of those resources is (including those that are shared), and how the resources are allocated for chargeback, return-on-investment decisions, costing analysis and reporting or billing.
IBM Tivoli Usage and Accounting Manager
provides the ability to apportion usage by
account, department or organisation. With
it, you can understand your costs and
track, allocate and invoice based on actual
resource use by department, user and
many additional criteria.
Summary
In summary, to ensure that the implementation of VMware becomes the cost-saving exercise that you wanted it to be, follow these four basic rules:
1. Understand the relationships and dependencies among physical and virtual infrastructure components
2. Monitor application, system and transaction performance across the virtual environment
3. Utilise resources more efficiently through the ability to automate critical operations, such as provisioning
4. Accurately assess usage in virtualised environments to better determine IT resource and expense justification
If you would like to understand further how Orb Data can help you to manage the Virtualisation cycle, be it using VMware or any other manufacturer's technology, then please call me on +44 (0) 1628 550450 or email [email protected].
The benefits of a global scheduling solution
Pete Meechan

There are a number of benefits provided by using a global scheduling solution to manage automated batch processes within your organisation. The benefits include:
• Centralised control and management of
batch across the organisation
• Reduced administration due to more
efficient centralised control
• Single global view of workload
dependency across multiple platforms
and applications
• Improved use of existing resources
resulting from better job coordination
across different platforms and/or
applications
• Lower cost of ownership when working
with a single scheduling solution instead
of multiple independent schedulers
• Common scheduling functionality
across different operating systems
and applications
Some of the typical questions raised when
discussing batch processes and
scheduling issues are:
• What is a batch process?
• Why would they need to be scheduled?
• Why can't we use the Windows Task
Scheduler or cron to schedule batch?
• What benefit is there in using a global
scheduling solution versus independent
schedulers?
This article looks at using the IBM global
scheduling product, Tivoli Workload
Scheduler (TWS) to answer these
questions, whilst looking at some of
the batch scheduling challenges in
more detail.
Introduction
Batch processes can have different
implementations across different
operating systems, but essentially they
consist of a process that can be executed
in a non-interactive mode. Users familiar
with the Windows desktop computing
environment are working in an interactive
mode, generally using a graphical
interface to interact with the operating
system. Whilst this works very well for
interactive functions, where repetitive
processing is required it is more efficient
to execute those processes in a non-
interactive mode, commonly referred to as
background or batch processing.
Many examples could be given of typical
batch workloads, but some of the more
common batch applications are:
• Payment files loaded by customers for
internet banking – these are normally
stored on the receiving bank computer
systems and processed collectively at
specific times during the working day or
overnight
• Customer orders that have been placed
throughout the working day - processed
overnight for dispatch the following day
• Stock updates from branches of shops
used for reordering purposes -
processed collectively for delivery to the
individual stores on a daily or weekly
basis
• Customer account balancing at the end
of the financial day - calculating interest
to be paid or interest charged
Scheduling environment
Most operating systems and some
applications provide a basic scheduling
capability. The Windows systems provide
the Task Scheduler and the UNIX/Linux
systems provide the cron scheduler.
Whilst these schedulers will normally meet basic scheduling needs, they can quickly become restrictive, particularly
when used in environments where cross-
platform or cross-application scheduling
is required.
It is not uncommon for applications today
to consist of different component parts,
possibly executing on different physical or
virtual systems, possibly executing upon
different operating systems, possibly
depending upon something other than
just starting at a specific time. These are
some of the issues that become difficult to
address with the basic schedulers
provided with the operating system.
Visibility
When using the individual schedulers provided by the operating system, there
is a lack of visibility of the overall batch
processing (i.e. a global enterprise view).
To verify if each batch process has
executed successfully, it is necessary to
check each scheduler individually under
the Windows Task Scheduler or cron on
each individual system. Otherwise, each
batch process would need to contain logic
to report to a central location – this extra
effort takes important and costly
development resource away from ensuring
that the batch process works successfully
and instead concentrates upon developing
a central reporting function.
Dependencies
Using the basic scheduling services of the
operating system is generally satisfactory
if the batch process only has a
dependency upon when the process is
started (e.g. 18h00). If other
dependencies are required such as:
• The batch process B on system B cannot start until the batch process A on system A has completed successfully
• The batch process C on system Z cannot start execution until the transfer of file /opt/customer/N12678/incoming/orders has completed successfully
• If batch process D terminates successfully, execute batch process E, otherwise execute batch process R to perform a recovery
• Batch process B cannot start before 18h00, but only if batch process A has already run and completed successfully. If batch process A has not completed before 18h00, hold batch process B until batch process A has completed successfully
• Batch process G can only run on the last working day of the month and must run after the daily batch process F has completed successfully
Looking at the typical dependencies listed
above, it becomes quite difficult to
schedule the different batch processes
without building the dependencies into
the batch processes themselves. As
indicated earlier, building dependencies
into the batch processes takes valuable
development resource away from
concentrating upon the batch process
itself and instead concentrates that
resource on developing control
mechanisms for batch dependencies.
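To illustrate the point, the hypothetical sketch below shows the kind of release logic a global scheduler provides centrally, which would otherwise have to be hand-coded into every batch process (the file path reuses the example above; everything else is invented):

# Hypothetical sketch of dependency-aware release logic: a job is only
# released when its time, predecessor and file dependencies are all met.
import os
from datetime import datetime, time

def can_start_b(job_a_succeeded, order_file):
    after_1800 = datetime.now().time() >= time(18, 0)    # time dependency
    file_arrived = os.path.exists(order_file)            # file dependency
    return job_a_succeeded and after_1800 and file_arrived

if can_start_b(job_a_succeeded=True,
               order_file="/opt/customer/N12678/incoming/orders"):
    print("release batch process B")
else:
    print("hold batch process B")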
Manageability and Scalability
Managing batch processes running under
the local operating system or application
schedulers can quickly become
unmanageable when performed manually.
It may be acceptable to manage 5 to 10
batch processes on a server using the
local scheduler function, whereas
managing 5 to 10 batch processes on 5 to 10
(or 100!) servers very quickly becomes a
management nightmare.
Change is probably one of the most
consistent aspects of IT – things will
change and batch processes will not be
an exception. Whilst changing 5 to 10
batch processes on a single server may
be manageable, changing 5 to 10 batch
processes across 5 to 10 (or 100!) servers
is both labour intensive and prone
to error.
Especially in the current economic climate,
companies are looking to reduce costs
through more efficient and effective use of
existing resources. One area being
considered is business processes with a
view to making them more automated and
less labour intensive. As more and more
business processes become automated,
the resulting automated process is ideal
for execution as a batch (non-interactive)
process. This requires a scalable solution
for scheduling the increased number of
batch processes.
Dynamic
The increase in the number of batch
processes requires additional
management capability for batch
scheduling and can also increase the
complexity of the scheduling
requirements. Traditional scheduling
requirements tend to be quite static
covering priority, time, resource or file
based dependencies, whereas automated
business processes require a more
dynamic capability. Consequently, there is
a need to respond to events occurring
throughout an automated process in a
more dynamic and timely manner.
This capability is referred to as event
based or event driven scheduling.
Scheduling software
Tivoli Workload Scheduler can help address the issues identified in the previous section. TWS achieves this through:
• Visibility - Tivoli Dynamic Workload
Console (TDWC) is a central web based
console that provides visibility of the
enterprise wide scheduling
requirements and current status. This
removes the requirement to visit
individual systems to determine the status of the batch processing.
• Dependencies - Tivoli Workload
Scheduler can schedule batch based
upon any combination of the following:
• Job
• File
• Date/time
• Resource
• Prompts
• Management and scalability -
centralised management of the
enterprise wide scheduling
requirements is performed from the web
based TDWC console. All that is
required is a current browser (e.g.
Internet Explorer or Firefox). Scalability
is provided for small enterprises
consisting of a few servers through to
large enterprises using many hundreds
of servers. The scheduling capability
can be increased (or decreased) as
required by adding (or removing)
intermediate domain managers within
the scheduling infrastructure. This
allows you to add additional scheduling
capacity as your scheduling
requirements grow.
• Dynamic - as more business related
processes become automated they are
ideally suited for background (non-
interactive) processing. However, these
processes generally require a more
dynamic approach to scheduling as they
do not always occur at a pre-defined
time or frequency. Tivoli Workload
Scheduler uses a rule-based event
engine to support these processes,
reacting to changes in process status in
real time, allowing batch processes to
be submitted based upon the presence
(or absence) of predefined criteria.
Interfaces with a number of other external products allow TWS to submit and
track batch processes on their behalf,
for example based upon the availability
of a specific application.
Tivoli Workload Scheduler Architecture
Tivoli Workload Scheduler can be
implemented in small, medium or large
environments and uses a hierarchical
structure to scale to the environment as
required. TWS can be implemented in
most small to medium environments using
a simple single master environment,
similar to the one shown above.
The Master Domain Manager (MDM) is the main controller of the batch scheduling process, responsible for generating the schedule plan and updating the TWS agents with the scheduling information. The TWS agents (Fault Tolerant Agents or FTAs) manage the scheduling of batch processes on each system, periodically updating the MDM with the status of the batch jobs (e.g. job A has completed successfully or job B has failed).

A web based console interface (Tivoli Dynamic Workload Console or TDWC) provides a graphical interface for management of the TWS environment. Full developer and operational functions are supported, including definition of schedules or jobs and monitoring the status of currently scheduled batch processes.

In larger environments, TWS can use Domain Managers (DM) to manage subsets of the batch processing infrastructure (see diagram). The Domain Managers provide a large degree of independence from the MDM, periodically reporting the status of batch jobs executing within their domain.

This infrastructure allows TWS to scale when necessary for very large batch processing environments. TWS is supported on most versions of Windows, AIX, Solaris, HP-UX and Linux (including z/Linux).
Tivoli Workload Scheduler 8.5
TWS 8.5 distributed builds upon the
functionality provided by the earlier TWS
8.3 and TWS 8.4 releases, encompassing
the same functionality provided in those
releases. Architecturally built upon the
same infrastructure as TWS 8.3/8.4 using
an embedded WebSphere instance and
DB2 (or Oracle), new functionality
provided with TWS 8.5 includes:
• Installation improvements
• Variable tables
• Workload Service Assurance
• IBM Support Assistant integration
• TDWC full capability console
• Support for DB2 V9.5
Installation Improvements
The installation process for TWS 8.5 has
been simplified using a common installer
with the same look and feel irrespective of
the chosen operating system platform.
The installer can install new TWS
components or upgrade existing
components. It provides a graphical
interface with documented installation
options and architectures.
The Tivoli Workload Scheduler (TWS) and
Tivoli Dynamic Workload Console (TDWC)
can be installed using the same
embedded WebSphere instance.
Variable Tables
TWS 8.5 introduces support for variable tables. Variable tables allow substitution
of different parameter values relative to
the TWS agent (FTA) executing the batch
job or schedule. This support assists the
creation of job and schedule templates,
where the same basic definition can be
used by different jobs, each substituting
parameter values suitable for each job.
For example, during development and
testing of batch jobs, there are usually
slight differences in the batch job
definitions such as the path structure. In
development the path used could be
D:\DEV\scripts\V1\move_files.bat,
whereas in test the path used could be
D:\TEST\scripts\V1\move_files.bat. Using
variable tables, TWS can automatically
substitute the correct PATH value based
on the TWS agent that executes the job.
The same technique can be used to
specify different run cycles or schedules
(e.g. DAILY in production, but WEEKLY in
test) as well as for groups of jobs (job
streams) and jobs.
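As a rough model of the idea (the agent names and the dictionary structure are invented; the paths are the ones quoted above), the substitution behaves like this:

# Illustrative model of variable-table substitution: the same job definition
# resolves to different values depending on the TWS agent that executes it.
VARIABLE_TABLES = {
    "DEV_FTA":  {"SCRIPTPATH": r"D:\DEV\scripts\V1"},
    "TEST_FTA": {"SCRIPTPATH": r"D:\TEST\scripts\V1"},
}

def resolve(job_command, agent):
    # Substitute each variable defined in the agent's table
    for name, value in VARIABLE_TABLES[agent].items():
        job_command = job_command.replace("$" + name, value)
    return job_command

print(resolve(r"$SCRIPTPATH\move_files.bat", "TEST_FTA"))
# -> D:\TEST\scripts\V1\move_files.bat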
Workload Service Assurance
Workload Service Assurance (WSA) helps
to proactively identify potential hold ups
or overruns within the schedule. Selected
jobs that are identified as critical to the
business can be flagged to TWS within the
schedule. Using historical data based
upon the elapsed time of the critical jobs
and the pre-requisite jobs, TWS
continuously monitors the expected
completion of the critical jobs. If the
calculation indicates that the critical job
is unlikely to complete on schedule,
TWS can alert you in advance of the
potential problem.
TWS will ensure that any pre-requisite jobs the critical job depends on are started as quickly as possible and given preference over other jobs not on the critical path, in an attempt to expedite the batch as quickly as possible.
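The underlying arithmetic can be pictured with a hypothetical sketch (invented job names and durations; it assumes the outstanding jobs run serially):

# Hypothetical sketch of the Workload Service Assurance idea: use average
# historical elapsed times of the critical job and its outstanding
# pre-requisites to flag a likely overrun ahead of time.
from datetime import datetime, timedelta

def predicted_finish(now, remaining_minutes):
    # remaining_minutes maps job name -> average historical elapsed minutes
    return now + timedelta(minutes=sum(remaining_minutes.values()))

now = datetime(2009, 8, 5, 16, 30)
deadline = datetime(2009, 8, 5, 18, 0)
remaining = {"extract": 45, "transform": 35, "load_critical": 25}

if predicted_finish(now, remaining) > deadline:
    print("warn: critical job unlikely to complete on schedule")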
IBM Support Assistant
TWS 8.5 includes support for the IBM
Support Assistant (ISA) to assist with
diagnosing problems related to TWS. ISA
can analyse log files and suggest possible
solutions to the problems discovered and
if this does not resolve the issue, collect
the necessary log and configuration files
required by IBM support to further
investigate the problem.
TDWC Full Capability Console
Tivoli Dynamic Workload Console (TDWC)
8.5 provides a single interface for all
management and monitoring
requirements for TWS. The TDWC includes
a Design function for adding, maintaining
and deleting database objects related to
scheduling. This includes all of the
following objects:
• Workstations
• Workstation class
• Job streams
• Jobs
• Calendars
• Resources
• Prompts
• Files
• Variable tables
Summary of benefits
Using Tivoli Workload Scheduler to manage the execution of your batch processes brings a number of benefits to the business and IT department, including:
• Efficient management of the batch scheduling requirements, being able to quickly address new or modified requests for batch processing
• Reduced administration due to
automated execution and monitoring
of the batch processes across
the enterprise
• Better use of existing resources as the scheduler can automatically delay or promote batch processes as resources become saturated or available
• Increased visibility of batch processes, improving the ability to meet service levels by addressing failures in a more timely manner
• Automated scheduling provides a more consistent and reliable execution environment
• Integration with 3rd party applications (e.g. SAP, PeopleSoft) and other Tivoli products (e.g. ITM, TEC) results in a fully automated centralised scheduling solution
About the author: Pete Meechan is a Principal Consultant and has been with Orb Data since 2000. Pete has gained considerable knowledge and experience across the range of management functions provided by Tivoli software, particularly with Tivoli Workload Scheduler, Tivoli Monitoring and Tivoli Provisioning Manager. Pete has a host of Tivoli certifications and is also an IBM Certified Instructor. Email: [email protected]
Free Tivoli Technical Account Managers

Through many years of working closely with our customers, we've come to realise that many Tivoli Administrators experience deep frustration when they require timely, expert advice to resolve technical support issues for Tivoli software.
Most customers need proactive,
technical account management, rather
than the reactive support that is often
provided through standard support and
maintenance arrangements.
In recognition of this, Orb Data assigns
a Technical Account Manager to each of
its customers. The aim of this role is to
establish regular, monthly contact with
the customer’s technical counterpart, in
order to augment the support provided
by IBM. The role provides ad hoc support
and advice on problems the customer
experiences, recommendation of fix pack
application, help with PMR resolution,
advice on technical enablement, and
provision of technical information
concerning Tivoli software products. This
service is offered free of charge and is
provided on an “as able” basis.
We'd like to extend the scope of this service, and offer it at no charge to any organisation that uses Tivoli software products as a customer of IBM. The key
benefits to customers are:
• Quicker time to ROI
• More time spent developing value for
their business, rather than being
reactive
• Shorter time to enablement
• Confidence in the advice provided
from subject matter experts
In order to take advantage of this
service, all we need is an initial
technical meeting in order for our
Technical Account Manager to
understand which Tivoli software
products you have deployed, and how
your solution is architected.
If your organisation would like to benefit from a Technical Account Manager from Orb Data, or you would like to find out more, please call or email Nigel Brown. [email protected]. Tel. 01628 550450
Self-Service Enterprise Monitoring
Ben Fawcett

The traditional approach to Enterprise Systems Management involves a single team responsible for all aspects of ESM delivery. This includes mastering the ESM software, establishing and maintaining service capabilities, defining suitable standards, processing incoming requests and managing exceptions. All too often, this results in an unnecessary bottleneck. Many organisations are recognising the need for a self-service approach to delivering ESM to the business.
This article discusses the key challenges
facing any self-service solution and how
the Self-Service Portal for Enterprise
Monitoring from Orb Data has been
designed to meet them.
Self-Service Approach
One clear advantage of the self-service
approach is the redistribution of roles to
those best suited to perform them.
It is system or application owners
themselves who are often best placed to
decide on the detailed levels of
monitoring for their systems. If they get
called out when something breaches a
threshold, they want to know it’s going to
be relevant. These are the ultimate end-
users of the self-service software. They
aren't interested in which vendor's agent
is monitoring their disks, but they do want
control over what it is looking at.
Of course, the responsibility cannot be
left just to these end-users. Build teams
can determine a baseline for OS and
hardware monitoring, and subject
experts can define standards for any
deployed applications.
Devolving responsibilities in this way:
• Improves the relevance and accuracy of ESM configuration
• Increases end-user awareness and involvement in ESM
• Releases the Systems Management team to focus on service quality
Presentation Layer
If a self-service solution is implemented
well then many other significant benefits
can be realised.
ESM frontends are traditionally too
heavyweight and esoteric for exposure to
typical end-users. A self-service
application inserts a presentation layer
between the end-users and the ESM
technology.
Some immediate benefits can be achieved
with this presentation layer:
• Simplification
• User-friendliness
• Access control
However, separating the presentation
layer provides the opportunity to add
some deeper and more fundamental
benefits:
• Independence from underlying technology
• Central database of configuration
• Comprehensive auditing
• Version control
• Rich reporting
Self-Service from Orb Data
Working from these fundamental
principles of self-service design, Orb Data
have developed a Web Portal application
for self-service Enterprise Monitoring.
Users and Responsibilities
End Users: Select and configure specific monitoring that will be implemented on
systems or applications. These are the owners of the system or application being
monitored and have a vested interest in ensuring that monitoring configuration is
accurate and relevant.
Systems Management: Create monitoring Capability definitions which represent
services available to the end user. Specify the links between these definitions and
the underlying monitoring technology. Examples of Capabilities are Unix Disk
Monitoring and MS SQL monitoring.
Portal Administrators: Set up the Portal resources such as Users and Targets and
control access using Projects.
Subject Experts: Relevant experts in application groups or build teams can
provide monitoring Standards to assist end users in creating profiles.
Configuration Profiles
For end-users the operation model is
based on reviewing, editing and
distributing configuration Profiles to
target systems.
The user creates a Profile by selecting any
of the available monitoring Capabilities,
choosing from the pre-defined Standards
and reviewing the resultant configuration.
Capabilities
Capabilities define the monitoring
services that are available for users to
select. Examples of Capabilities are
Process Monitoring, Disk Monitoring and
Windows OS Monitoring.
The ESM team are responsible for creating
Capabilities as they are the link between
the presentation layer and the underlying
monitoring technology. The definition of a
Capability consists of a number of values
and a number of actions.
The values are meta-data that describe
the fields which a user fills in. For
example, for Process Monitoring these
values might include a Text field for
process name and an Integer field for
alert threshold.
The actions determine how the user input
values are first transformed and then
implemented. The data transform stage
may be a simple text transform using a
template language or a more complex
scripted transform.
The implementation stage includes
actions to implement the given transform
using the monitoring technology. This
might include copying over SSH,
publishing to an HTTP Server or assigning
ITM 6 Situations.
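Purely as a hypothetical sketch (the template format and field names are invented for illustration, not the Portal's actual syntax), a simple text transform for a Process Monitoring Capability might look like this:

# Invented example of a Capability transform stage: user-supplied values are
# merged into a configuration template before the implementation actions run.
from string import Template

PROCESS_MONITOR_TEMPLATE = Template(
    "PROCESS $process_name ALERT_IF count < $min_count"
)

user_values = {"process_name": "sqlservr.exe", "min_count": 1}
print(PROCESS_MONITOR_TEMPLATE.substitute(user_values))
# -> PROCESS sqlservr.exe ALERT_IF count < 1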
Access Control
User access is restricted by a Project membership model. Users can only work with Targets and Profiles that are in their Projects.
Version Control
Version control is automated by the Self-Service Portal. For the end-user who is simply editing and distributing Profiles, version control works
transparently without any special
consideration. Simple reports inform the
user if a Standard they have used has
since been updated, or if a Profile they
have sent to a target has changed and
might need to be resent.
For advanced users the version control
allows a powerful level of auditing and
reporting on the enterprise. It is possible
to track the history of configuration on
systems and to roll back configurations to previous settings.
Auditing
Auditing is an essential requirement of
any Enterprise class management
application. The Self-Service Portal
automatically audits resource access and
usage. The audit detail is sent to the
database and is available in the master
audit reports as well as context-related
reports throughout the interface.
If you would like to be involved in the early release program for the Self-Service Portal from Orb Data or would like any more information on the topics discussed in this article please email [email protected]
Terminology
Target: Represents any resource that can be monitored; typically this would be a
server but it could be an application or a database.
Capability: Defines a monitoring service that is available. Examples of Capabilities
are Process Monitoring, Disk Monitoring and Windows OS Monitoring. The
capability includes meta-data regarding what can be monitored and how the
monitoring is implemented.
Standard: A Standard is a set of values for a Capability that defines a common
configuration. For a Process Monitoring Capability some examples of
Standards might be Windows Common Processes, Oracle Server Processes
or Veritas Processes.
Profile: A Profile defines a configuration that can be sent to a Target. It can
include multiple Capabilities and related Standards. Users have the option to add
or override values if they wish.
Project: Projects determine which Targets and Profiles a User can access.
Membership to a Project means that the User can see all the Targets and Profiles
in that Project. Users and Targets can be in multiple Projects.
Netuitive Self-Learning Technology: The New Wave in IT Performance Monitoring
Jean-Francois Huard

The inherent flaw with conventional monitoring tools and even newer analytics products is that they rely on human assumption.
These tools require administrators to
program logic into their performance
models through the use of rules and
scripts. Moreover, they require operators
to set static thresholds in order to
generate monitoring alerts. With all of
this guesswork it is no wonder that IT
organizations are primarily reactive,
always fire-fighting and remain at the
mercy of their business systems – never
knowing where things will break next.
While this has mostly been accepted as
the industry norm, we are now in the
midst of several transformations that
make it impossible to stay on the same
path while remaining competitive. First,
there is the always-on enterprise -- system
slowdowns are no longer an
inconvenience, they’re a threat to
business reputations, operations and the
bottom-line. Second, there is the
movement toward ITIL and business
service management -- operating IT as
service delivery organizations rather than
standalone system silos. And third, the
rapid emergence of virtualized servers,
where performance management is even
more complex than it was on the
physical ones.
The increasing operational demands and
complexity created by the always-on
enterprise, cross-silo IT service
management and virtualization are quickly
making the rules-based tools increasingly
less effective. While some improved
products have emerged to address this
challenge such as “event correlation,”
“dynamic thresholding” and “pattern
matching” tools, Netuitive has made a
leap forward with its self-learning
performance management technology.
Rather than depending on human
guesswork, Netuitive uses a statistical-
based approach which automatically
analyzes and correlates thousands of
system metrics in real-time to learn the
normal behavior patterns of a given
environment, provide an end-to-end
service health dashboard, isolate root-
causes and forecast degradations.
Netuitive: Taking Guesswork out of the Equation
Netuitive software leverages existing
monitoring agents, such as Tivoli, to
collect raw numeric data at the sub-
system metric level for each key
performance indicator (KPI), such as CPU
and memory utilization, context switching,
disk and I/O activity and hundreds of
other application metrics. Netuitive learns
the behavior patterns of each individual
KPI for a given day of week, hour of day
and minute of the hour. It also learns how
one KPI behaves in context of the others,
which is essential for gaining a holistic
picture of system and service health.
Contextual Analysis
Contextual understanding of KPI
behaviors through an objective lens is
fundamental for effective performance
management. As an analogy, a medical
doctor determines his patient’s health
based on a combination of vital signs (key
performance indicators) such as blood
pressure, heart rate, body temperature
and others. Only by observing and
analyzing (correlating) all of these
conditions together can the doctor
accurately diagnose the patient’s health
and even predict future illness.
Similarly, Netuitive assumes nothing
about the “patient.” Instead it analyzes
and determines outcomes based on its
own observation. Like an automated
diagnostician, this technology
continuously analyzes IT “vital signs,”
to determine the current and anticipated
health of systems and the services
being supported.
Netuitive self-learns the
interdependencies between each “vital
sign.” It understands how each KPI
influences the other, which can be seen in
the software’s “Correlation Assistant”
interface (see diagram). All of these interdependencies, which are scored on a scale from 0.0 to 1.0, are self-learned. None of these correlation coefficients are determined through manual means.
To determine how each KPI behaves in
context to the others, Netuitive uses
multivariate regression analysis and other
mathematical techniques to predict
outcomes of a given KPI through the
observation of other related KPIs.
For each KPI, Netuitive calculates its
“contextual” performance value in real-
time -- using the correlated KPIs of the
same time period and their
previous history.
In addition to determining real-time
contextual performance, multivariate
regression enables Netuitive to calculate
the forecasted performance of KPIs up to
two hours in advance.
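A toy sketch of the statistics involved (not Netuitive's implementation; the KPI series are invented, and a single regressor stands in for the many KPIs Netuitive correlates simultaneously) might look like this:

# Correlation between two KPI series, and a least-squares fit that yields a
# "contextual" expected value for one KPI given another. Data is hypothetical.
import numpy as np

web_response = np.array([120, 135, 150, 170, 190, 210.0])  # ms
os_cpu = np.array([30, 38, 45, 55, 62, 70.0])              # %

corr = np.corrcoef(web_response, os_cpu)[0, 1]
print("correlation: %.2f" % corr)   # close to 1.0 for these series

# Fit web_response = slope * os_cpu + intercept by least squares
A = np.vstack([os_cpu, np.ones_like(os_cpu)]).T
slope, intercept = np.linalg.lstsq(A, web_response, rcond=None)[0]
print("expected response at 58%% CPU: %.0f ms" % (slope * 58 + intercept))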
All three methods – actual, contextual and
forecasted -- are calculated
simultaneously to understand overall
system health. By using multiple,
simultaneous methods of analyzing each
individual KPI, Netuitive delivers unrivaled
accuracy for IT system management.
Trusted Alarms
Trusted Alarms are one of Netuitive's most
valued features and only made possible
through Netuitive’s automated contextual
analysis approach. Delivered on both a
real-time and forecasted basis, these
alerts provide an easy-to-understand
composite view of impending issues, and
can be integrated into existing monitoring
consoles and trouble ticketing systems.
They are generated using an accumulated
score of real-time, contextual and
forecasted deviations – taking into
account the number, frequency and
severity of each deviation. Each system-
verified Trusted Alarm results from
analyzing dozens of real-time and
forecasted calculations. A Trusted Alarm
can be generated for an individual system
issue or an end-to-end business service.
When compared to alerts generated from
manual static-thresholds, Netuitive
Trusted Alarms are not only proactive, but
reduce false alerts by a dramatic ratio.
Web Request Response Time (WRT)
shows a value of 1.0 since it is of
course correlated 100% to itself.
Clearly WRT is highly correlated to the
OS related metrics in the Websphere
Group (e.g. .97 or 97% for OS CPU).
Any metric greater than 30% is
considered to be a statistically relevant
interdependency.
Netuitive’s statistic-based tolerance bands are dynamic, time-based models that
capture rhythms of performance and automatically adapt as environmental changes
are detected:
1. Real-time: Compares the actual value of a given KPI against its historic behavior.
2. Contextual: Compares the actual value of a given KPI against its real-time
expected behavior, which is calculated from its other interdependent metrics.
3. Forecast: Anticipates performance two hours out and is computed using
Netuitive’s trends-based analysis algorithms.
Each of the described profiles is made up of a tolerance band that incorporates both an upper and lower dynamic threshold based on an adjustable number of standard deviations. These statistical deviations can be tuned by the user to adjust for sensitivity.
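A minimal sketch of such a band (hypothetical data; a real product learns one band per time slot across weeks of history):

# Tolerance band for one time slot: mean +/- k standard deviations of the
# values historically seen in that slot. All numbers are invented.
import statistics

def tolerance_band(history, k=2.0):
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean - k * sd, mean + k * sd

monday_0900_cpu = [41, 44, 39, 46, 43, 40]   # % CPU in the same slot, past weeks
low, high = tolerance_band(monday_0900_cpu)
actual = 61
if not (low <= actual <= high):
    print("deviation: %d%% outside [%.1f, %.1f]" % (actual, low, high))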
Health Index
System and service "health" indicators
are only made possible through the use of
contextual analysis. Just as a person’s
overall health can be determined through
observation and analysis of multiple vital
signs, Netuitive determines health by
understanding related KPIs that make up
a service delivery ecosystem – business
activity, applications, servers, databases,
network, etc.
For example, if a person is
hyperventilating it might be because
they’re having a heart attack, not a
breathing problem. Or maybe this is a
normal response to a vigorous jog. By
correlating heart rate to breathing
patterns along with other indicators, and
analyzed with normal behavior patterns
for the given time period, the underlying
problem can be accurately diagnosed.
Likewise, in IT environments, CPU spikes
can be caused by a failing hard drive, a
badly behaving application, a sudden
flood of users or a virus attack. It could
also just be a harmless anomaly.
But without contextual analysis there
is no easy way to determine what’s
really happening.
Netuitive self-learns regression weights
and correlation coefficients between all
the customer experience (latency
measurements) and infrastructure metrics
automatically and in real-time. In
addition, Netuitive generates a health
index by also considering the frequency
and severity of a given set of anomalies.
Health is represented in the software
through the Netuitive dashboard and
service models.
Workload Index
The Workload Index is also a powerful derivative of Netuitive's self-learning.
These values are automatically calculated
in real-time, without the need for any
manual configuration, using KPIs collected
from standard monitoring tools. The
Netuitive Workload Index is represented
by a value between 0 and 100, and factors
in multiple KPIs by resource type. For an operating system, for example, the resources consist of CPU, memory, network and disk, where each resource is represented by multiple KPIs. In
addition to OS-related workloads,
Netuitive uniquely builds workload
indexes for any hardware or software
component such as the application,
storage area network, middleware or
server clusters.
Conclusion: The Value of Self-Learning Technology
While today's businesses rely on complex
21st Century applications, the tools to
manage them use technology from two
decades ago. It is no wonder that a recent
poll found that most IT shops first learn
about incidents when users call their help
desks. Essentially, they have no reliable
visibility into their infrastructure
environments, let alone the ability to
forecast problems before users notice
them. Now with the growing adoption of
BSM and virtualization, the complexity is
accelerating beyond the breaking point.
Something has to change. Trying to
manage systems or business services with
conventional monitoring tools alone
means you are constantly inundated with
more data than you can possibly analyze
and act on. Used in conjunction with
Netuitive, however, the value of tools such
as Tivoli can be unlocked and increased as
the raw data they provide is used to
manage enterprise estates more
efficiently and more cost effectively.
By applying advanced contextual analysis, using multivariate regression and other math-based techniques, Netuitive
stands on its own as the industry’s only
true self-learning performance
management software. As a result,
Netuitive self-learning performance
management software delivers one-of-a-
kind benefits, including:
• Automates performance management
administration – eliminates rules, scripts
and thresholds.
• Isolates root-causes across IT silos
(physical and virtual).
• Forecasts issues up to two hours ahead.
• Provides unprecedented visibility of
system and service performance,
including composite health indexes.
• Provides an intelligence layer on top
of existing monitoring solutions with no
new agents to deploy.
Only through a true self-learning
approach is a solution like Netuitive’s
able to deliver accurate views of Service
Health, Adaptive Behavior Profiles,
Trusted Alarms and Forecasting.
About the author: Jean-François Huard is Chief Technical Officer and Vice President of Research and Development at Netuitive, Inc. In this role he is responsible for leading the vision and technology innovation effort of the company.
User Defined ITNM Stitchers
Nick Landsdowne

What is a stitcher?
A stitcher is an ITNM (IBM Tivoli Network Manager) process used for the
collection and processing of network data.
Data collection stitchers are utilised
during the initial phases of the discovery
process, passing data between finders
and discovery agents. Data Processing
stitchers build the network topology
layers during the final phase of the
discovery, ultimately writing the data to
the database table
scratchTopology.entityByName, from
where the data is transferred to the
topology database Model by the stitcher
SendTopologyToModel.
Why use a custom Stitcher?
Custom stitchers are generally used to
extend the data retrieved by the discovery
process, processing the details in the
scratchTopology.entityByName
table. The stitchers can be used to add
devices and links that cannot be
discovered or associate customer specific
data to devices.
For example, a firewall that cannot be
discovered can be added to the topology
and links added from connecting devices.
Similarly, the location and owner of a
device can be extracted from an external
database, and associated with a device so
it is visible from the Tivoli Integrated
Portal (TIP).
Basics about Stitchers
Stitchers are located on the ITNM Server, in the directory $NCHOME/precision/disco/stitchers, and
have the extension stch. A stitcher can
be domain specific, as with all ITNM
configuration files, achieved by inserting
the ITNM domain name between the
filename and the extension, for example
MyStitcher.My_Domain.stch.
Stitchers may be initiated in a number of
different ways:
– On completion of a discovery agent
– On completion of a discovery phase
– On completion of a specified stitcher or stitchers
– When data is inserted into a specific database table
– On a time based interval
– On request from another stitcher
User defined stitchers are text files, and
commonly fall into the last category, being
initiated from the stitcher
PostScratchProcessing. This is the
last chance to process the topology before
it is sent to Model.
Basic Scratch Topology Schema
To enable an ITNM administrator to develop a new stitcher it is useful to have a basic understanding of the database table scratchTopology.entityByName. This table contains the data that
will ultimately be transferred to the Model
database, and onto the NCIM database. It
is the NCIM database that TopoViz uses
for topology visualisation and where
devices are identified for polling.
Each row of the scratch topology table
details information on a specific entity,
including the field EntityType, an integer that distinguishes between the following entity types:
• 0: Unknown
• 1: Chassis
• 2: Interface
• 3: Logical interface
• 4: Vlan object
• 5: Card
• 6: PSU
• 7: Subnet
• 8: Module
A Stitcher will often use the field
EntityType to restrict which rows it
processes. For example, when adding
location details it is recommended that
this is only attached to a chassis or
module, as the other entities are
contained entities, i.e. a sub-component
of the chassis or module.
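This restriction is easy to preview from the OQL interface before writing any stitcher logic. The query below is a sketch, using only the fields described above, and lists the entities a location-adding stitcher would consider:

select EntityName from scratchTopology.entityByName
where EntityType = 1 or EntityType = 8;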
The field EntityName uniquely identifies
the entity and will generally be derived
from a hostname. Where hostname
resolution is not available the TCP/IP
address is likely to appear within the field.
MAC Address and TCP/IP Addresses are
detailed in the field Address. This field is
of the data type List of Text, and
hence it is necessary to use a numeric
index to reference a specific value, for
example Address(2) for the primary
TCP/IP address or Address(1) for the
primary MAC address of a chassis.
The ExtraInfo field is used to store
information that is specific to an entity
type. This field is of data type vblist,
where the data is referenced using a
string based key, in a similar fashion to a
Perl associative array. For example, the
field may include additional SNMP information (sysLocation, sysUpTime and sysName) for a chassis, referenced by the keys m_SysLocation, m_SysUpTime and m_SysName. To
reference a specific value within the list,
use the syntax ExtraInfo->IndexName,
for example ExtraInfo->m_SysLocation.
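To see these values outside a stitcher, the same accessor can be used from ncp_oql. This is a sketch that assumes the -> accessor is accepted in a select list just as it is in where and update clauses:

select EntityName, ExtraInfo->m_SysLocation
from scratchTopology.entityByName
where EntityType = 1;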
Certain entity types are components of
other entities, for example an interface is
part of a chassis. This relationship is
defined within the field
UpwardConnections. Don’t confuse this
with the field RelatedTo that identifies a
list of entities that are connected to the
network entity.
These and the other fields are fully
documented within the ITNM 3.8
Discovery Guide.
Querying the Database Table scratchTopology.entityByName
It is useful to familiarise oneself with the
data stored within the database table
scratchTopology.entityByName
before attempting to process it. There are
two methods for querying the data,
through the OQL command line interface
and from the Tivoli Integrated Portal (TIP).
From the TIP, select the navigator option
Administration>Network>Management
Database Access, as demonstrated in the
figure below.
The administrator must select the relevant
ITNM domain, the DISCO service and
input the required OQL statement, for example select * from scratchTopology.entityByName where EntityType = 1; as demonstrated above right.
To run the same query from the OQL
command line, run:
ncp_oql -domain CORE_DOMAIN -service DISCO -username admin
-password 'password' -latency 120000
-query "select * from scratchTopology.entityByName where EntityType = 1;"
Executing query:
select * from scratchTopology.entityByName where EntityType = 1;
( 25 record(s) to follow )
{
    EntityName='route2.ccna.com';
    Address=['','','10.201.1.21'];
    Description='Cisco Internetwork Operating System Software ..IOS (tm) C2600 Software (C2600-JS-M), Version 12.1(8), RELEASE SOFTWARE (fc1)..Copyright (c) 1986-2001 by cisco Systems, Inc...Compiled Tue 17-Apr-01 05:38 by kellythw';
    EntityType=1;
    EntityOID='1.3.6.1.4.1.9.1.208';
    IsActive=1;
    Status=1;
    ExtraInfo={
        m_IpForwarding=1;
        m_SysName='router2';
        m_DNSName='route2.ccna.com';
        m_SysLocation='';
        m_SysContact='';
        m_SysUpTime=8744984;
        m_SysServices=78;
        m_IfNumber=13;
        m_OSPF={
            m_IsBdrRtr=0;
            m_OspfDomain=0;
        };
        m_AssocAddress=[{ m_IfIndex = 1, m_IpAddress = '10.201.1.21', m_Protocol = 1, m_IfOperStatus = 1 }];
        m_BaseName='route2.ccna.com';
        m_AddressSpace=NULL;
        m_AccessProtocol=1;
        m_Location='Swansea';
        m_Department='Sales';
        m_DeviceType='unknown';
        m_Continent='unknown';
        m_City='unknown';
    };
    Contains=['route2.ccna.com[ 0 [ 1 ]]','route2.ccna.com[ 0 [ 14 ]]','route2.ccna.com[ 0 [ 5 ]]','route2.ccna.com[ 0 [ 6 ]]','route2.ccna.com[ 0 [ 7 ]]','route2.ccna.com[ 0 [ 8 ]]','route2.ccna.com[ 0 [ 9 ]]'];
}
Adding Basic Data to the ExtraInfo Field
The value of the ExtraInfo field within
the example above includes a number of
custom details, for example m_Location
and m_Department. This information
was added by custom stitchers, initiated
from the PostScratchProcessing stitcher,
designed to analyse the subnet of each
chassis, and append the data accordingly.
To achieve this it is necessary to
understand the basic format of a
text stitcher.
A stitcher includes two main sections,
defining stitcher triggers and stitcher
rules. The triggers identify when the
stitcher is run, the rules identify the logic
of the processing. Hence the shell of a text
stitcher is as follows:
// Start trigger definition
UserDefinedStitcher
{
    // When does this trigger run?
    StitcherTrigger
    {
        // Triggers entered here
    }
    // What does this trigger do?
    StitcherRules
    {
        // Rules entered here
    }
} // End trigger definition
For this example, the trigger will be
initiated from the PostScratchProcessing stitcher, hence the first
section is defined as follows:
StitcherTrigger
{
    ActOnDemand();
}
The rules section updates the ExtraInfo
field, adding the m_Location value for
specific subnets:
StitcherRules
{
    ExecuteOQL("
        update scratchTopology.entityByName set ExtraInfo->m_Location='Swansea'
            where ((EntityType = 1 or EntityType = 8) and Address(2) like '10\.201\.1\.');
        update scratchTopology.entityByName set ExtraInfo->m_Location='Cardiff'
            where ((EntityType = 1 or EntityType = 8) and Address(2) like '10\.202\.2\.');
        update scratchTopology.entityByName set ExtraInfo->m_Location='Llanelli'
            where ((EntityType = 1 or EntityType = 8) and Address(2) like '10\.203\.3\.');
        update scratchTopology.entityByName set ExtraInfo->m_Location='Port Eynon'
            where ((EntityType = 1 or EntityType = 8) and Address(2) like '172\.16\.31\.');
    ");
}
The extract above demonstrates the
ExecuteOQL rule being used to run
multiple “update” commands, each
updating the ExtraInfo->m_Location
value dependent on the value of the
subnet (the where statement).
The full stitcher syntax is as follows:
// Stitcher to add the location to a chassis or module
// based on the TCP/IP subnet
UserDefinedStitcher
{
    // When does this trigger run?
    StitcherTrigger
    {
        ActOnDemand();
    }
    // What does this trigger do?
    StitcherRules
    {
        ExecuteOQL("
            update scratchTopology.entityByName set ExtraInfo->m_Location='Swansea'
                where ((EntityType = 1 or EntityType = 8) and Address(2) like '10\.201\.1\.');
            update scratchTopology.entityByName set ExtraInfo->m_Location='Cardiff'
                where ((EntityType = 1 or EntityType = 8) and Address(2) like '10\.202\.2\.');
            update scratchTopology.entityByName set ExtraInfo->m_Location='Llanelli'
                where ((EntityType = 1 or EntityType = 8) and Address(2) like '10\.203\.3\.');
            update scratchTopology.entityByName set ExtraInfo->m_Location='Port Eynon'
                where ((EntityType = 1 or EntityType = 8) and Address(2) like '172\.16\.31\.');
        ");
    }
} // End of the trigger
As indicated, this stitcher is designed to
be run from the PostScratchProcessing
stitcher. This is achieved by creating a
domain specific copy of the stitcher, and
adding the following line to the end of
the file:
ExecuteStitcher("AddLocationFromTCPIPAddress");
This assumes that the custom stitcher is
named AddLocationFromTCPIPAddress.domain_name.stch.
The stitcher updates will automatically be
loaded, as the stitcher directory is
monitored for changes every minute (by
default). The additional information will
be added during the next discovery.
Alternatively, the existing data in the
scratchTopology database can be resent
to Model by running the command:
ncp_oql -domain <domain_name> -service disco -username admin
-password 'password' -latency 120000
-query "insert into stitchers.actions values ('SendToModel');"
Mapping Data into the NCIM Database
The stitcher updates discussed so far will transfer the extra information to Model, to the table master.entityByName, but
further modifications are required to
transfer that information to the NCIM
database, and hence make it available to
TopoViz for display.
The easiest method for achieving this end
is to transfer the data to the pre-defined
NCIM database table entityDetails.
This is made relatively simple, as such mappings can be defined within a Model database table also called entityDetails.
The Model table entityDetails has two fields, EntityType and EntityDetails, identifying the numeric
for the entity type that the map applies to,
and a list of the mappings. These
additions should be made to the domain
specific configuration file
DbEntityDetails.<domain>.cfg in
the directory $NCHOME/etc/precision.
The following update is required to map
ExtraInfo->m_Location to an NCIM
field Location:
insert into dbModel.entityDetails
(
    EntityType,
    EntityDetails
)
values
(
    1, -- chassis
    {
        Location = "eval(text,'&ExtraInfo->m_Location')"
    }
);

As the field EntityDetails is of the data type list, its value or values are assigned comma-separated within the curly brackets, for example:

{
    Location = "eval(text, '&ExtraInfo->m_Location')",
    Contact = "eval(text, '&ExtraInfo->m_Contact')"
}

Displaying the Extra Information
Once the extra information has been transferred to the NCIM database, the information can be displayed from the TIP, using the structured browser, as demonstrated in the figure above.

Conclusions
The above is a very basic example of how a stitcher can be used to add additional information to devices for display through the Tivoli Integrated Portal. The example used static information, defined within the stitcher itself, and the use of the information was limited to display through the structured browser. Future articles will discuss more practical examples, where the extra information is derived from a database, is utilised for dynamic view filters, and where the data is added to events inserted into the ObjectServer.
About the author: Nick Lansdowne is a
Principal Consultant and has been with
Orb Data since 2004. Nick has gained
considerable knowledge and experience
across the range of management
functions provided by Tivoli software,
particularly with Tivoli Monitoring and
Netcool products. Nick has a host of Tivoli
certifications and is also an IBM Certified
Instructor.
Email: [email protected]
Orb Flex – The future for Contract Tivoli Specialists

Orb Data is pleased to announce the establishment of a new subsidiary: The Orb Flex Agency.

This decision is in direct response to the needs of our clients, and the shortcomings of the incumbent providers.

Orb Flex's services are complementary to the high quality consulting services provided by its parent company, giving our customers additional flexibility to manage their resource cost effectively.
Why you should use Orb Flex:
We know Tivoli
As a specialist Tivoli agency, we understand the products and what you are asking for to fulfil your vacancy. Unlike the traditional agency, which tries to be all things to everybody and isn't always effective in matching candidates to a role, you can rely on Orb Flex to only put forward candidates with skills and experience relevant to your requirement.
Consistent matching of candidates to roles
We maintain an accurate database of candidates, the products that they have experience in, and the training and certifications that they hold. This allows us to match candidates to roles using a much more sophisticated approach than a simple keyword search. How often have you been sent CVs which mention Tivoli but are completely unsuitable for your needs? Orb Flex will not waste your time in this way.
Quality assured
Before recommending candidates to our customers, Orb Flex carries out an independent assessment of their technical abilities. We ensure that when someone's CV claims they know a product, they really do. So you can have confidence in our recommendations, and save yourself hours of wasted time interviewing unsuitable applicants.
Trained resources
Orb Flex is unique in the market: it provides its contractors with education. Orb Flex gives up to 2 weeks of training per year, delivered by Orb Data. This ensures that we can provide candidates that have the up-to-date product knowledge that you need and expect.
Backed by the experts
Orb Data is an IBM Premier Partner, with the prestigious AAA skill rating across multiple products. No other organisation has such a high level of skill.
Find out more by emailing [email protected]
ForestSafe – Password Provisioning and Remote Access
Paul Hawkins

Every unmanaged password, an account with a password property set to never expire, is a risk to your business.
At best these are the high level accounts
running your windows services, or at
worst every local windows administrator.
Giving a fully audited, controlled remote
SSH Terminal and Remote Desktop access
function to your support staff has cost
and security benefits.
ForestSafe removes the need for your support teams to know any passwords apart from those of their own computer accounts, which are known only to them.
Enhanced Security by not storing passwords
ForestSafe does not suffer from the 'keys to the kingdom' problem, as passwords are not stored in the database but
generated as they are required. This
ensures that ForestSafe does not
introduce another point of attack into
your environment.
The Security model
The system has several security layers.
Some teams may require constant remote
access to a known list of servers. Another
team may want infrequent access to any
server driven by a change record.
The system is designed to give audited
control to enable any access to be
established at any level.
ForestSafe is configured to trust a set of
Windows domain accounts. Their domain
group membership maps to ForestSafe
Administrator groups. Access is via a Web
Page either by credential entry or Single
Sign On. For Unix-only shops, a Virtual
ForestSafe Domain can be trusted
through to LDAP.
Segregation of roles is required by COBIT
and the Sarbanes-Oxley Act. It is vital that
partitions exist between the various
functions of a system so employees in
one section cannot interfere with the
work of others.
Every ForestSafe function can be added
or removed from the ForestSafe
Administrators desktop using
Administrator Role Management.
The mapping of any system function
to the Administrator group is completely
orthogonal.
Access Approval is set up to apply an additional layer of authority between users and their password retrievals.

A ForestSafe approval period can be configured to start immediately or in the future, and set to terminate at a given time. During this period the ForestSafe user requiring approval has a view of the approved target. Administrators can be set up as approvers or as requiring approval.
Access Control Lists define which
machines Administrators are allowed to
access and which accounts they are
allowed to use to logon.
ForestSafe is configured to create
hierarchical “Host Containers”.
Administrator Roles are mapped against
any container in the hierarchy and will
inherit any hosts present in the sub-
containers. The ForestSafe Administrator
is presented with a restricted list of
choices based on either their current
approvals, or if approval layer is not
enabled, the contents of the host
container associated with their
Administrator Role.
Target Identity ratification ensures the
host being accessed is the intended host
and not a “Man in the Middle”.
For every host configured for access via SSH, ForestSafe requests a public key or fingerprint from the host on discovery. This key is
stored against the host record and
compared every time a remote access
takes place.
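For readers unfamiliar with key ratification, the idea can be demonstrated with standard OpenSSH tooling. This sketch illustrates the concept only; it is not how ForestSafe implements the check internally:

# Capture the host's public SSH key, as would happen at discovery
ssh-keyscan -t rsa server3 > server3.key
# Derive the fingerprint that would be compared on every later access
ssh-keygen -lf server3.key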
Remote Access Validation is the final doorway to the remote system.
Remote terminal validation is either by
credential entry or Single Sign On. If
Single Sign-on is disabled, the
Administrator could be given access to
the self service password vault to retrieve
the required account password. If single
sign-on is used alongside the approval
layer, the approver could grant the
remote access, and the approved user
could logon without revealing the
password. Moreover we allow support
staff to logon to Windows Domain
machines using their own domain
account, placing the account temporarily
into the local Administrator group.
User Validation is the doorway to remote access.

Policing the security
Released password monitoring. If a released password has not been used within a specified time, the password can be forced to be scrambled, preventing logon. All account logon activity is monitored on the dashboard. Detailed reports on user accesses are accessible by clicking the dashboard.
Scrambled password monitoring. All
managed computer accounts can be
regularly checked to ensure that every
account password set by ForestSafe has
not been changed locally on the machine.
All exceptions are monitored on the
dashboard.
Securing the Application
ForestSafe leverages Windows Active
Directory security to control access to the
ForestSafe application. This ensures
granular access control of the components
of the application and the resources
controlled by the application. Active
Directory groups are used to control
access for various ForestSafe
administrators.
ForestSafe Platforms
ForestSafe scales across any configuration
of any platform. It will manage all your
Windows Domain computers, and all your
AIX or Solaris computers. It can also
manage the accounts of your routers and
firewalls, and old legacy kit that only
supports Telnet. If you have any in-house
systems with accounts that need
management, a sophisticated template
command mechanism enables you to
bring these accounts under the
management of ForestSafe.
Windows
ForestSafe provides a complete Enterprise
Password Management system for all
Microsoft Windows local and shared user
accounts. The password of every 'built in
Administrator' account can be managed
with a single "Local Windows Account
Policy" entry, each administrator is given
a unique password.
If the 'Built in Administrator' account has
been used as the Logon User to a
Windows Service, then "Shared Account
Policy" is used to synchronise the local
user account and the service logon
account. Most Windows subsystems, e.g. Services, MSSQL and IIS App Pools, are managed out of the box. Moreover, a
customisable Shared Account interface
means any in-house custom Windows
application account passwords can also be managed: uniquely or
synchronised with any other ForestSafe
managed accounts.
Non Windows
Management for Unix, Cisco and Firewall
accounts, and any platform that supports
SSH or Telnet is provided. The password of
every host's root account can be managed
with a single "Local Windows User Policy"
entry; each root account is given a
unique password.
Any account passwords can be managed
and an advanced extendable template
system means that any in-house custom
application can be managed: uniquely or
synchronised with any other ForestSafe
managed user/s.
The most secure method for Unix user
account management is using SSH Key
encryption, the ForestSafe Unix default.
ForestSafe has central SSH Key
management. It also stores fingerprints to
prevent 'man in the middle attacks'.
ForestSafe can also communicate via
telnet. Using telnet, user accounts on
any hardware supporting Telnet,
e.g. Cisco Router and OS/390 Mainframe
are supported.
Any Windows account password can be
shared with any UNIX user account.
Remote Terminal Access
Windows systems are accessed through a Web based Remote Terminal, and Unix systems are given access through a Java based SSH terminal. EESM have licensed and embedded MINDTERM, the industry's leading Java SSH Terminal from APPGATE, in the ForestSafe Web application.

ForestSafe Administrators map to corresponding Active Directory Security Groups; any function of the system can be mapped to any ForestSafe Administrator.

Machine Types:
- Single Windows Domain workstations and servers
- UNIX SSH and Telnet
- Any device that supports SSH or Telnet

Supported Platforms:
- XP/Server/Vista
- AIX/SOLARIS/HPUX/Linux (with Telnet: Cisco, OS/390)

Screenshot captions: the system has successfully retrieved the password; the Remote Desktop opens in a new web page (any number of Remote Desktops can be open at the same time); here the user has selected a Unix server; Unix server access runs in an SSH terminal window in a new web page.
Support users are required to enter a
Reason why Terminal Access is required,
and like every page in the ForestSafe
system, every change or remote access is
logged to the ForestSafe Audit Log.
All account access information can be
retrieved through the ForestSafe
Audit report.
Account Provisioning
Temporary account provisioning
ForestSafe can create a temporary
Administrator account with a common
password across a range of machines for a
controlled time period, e.g. giving staff
local machine access to carry out
weekend tasks.
Permanent account provisioning
It can also create a new Administrator
account with a unique password across a
range of machines and manage the
password, or let the user manage the
password. Any new or replaced machines
will receive provisioned accounts
automatically.
Password Retrieval through API (CLI)
A console application is available that can
be configured to retrieve the password of
any account that is managed by
ForestSafe. For example, the ForestSafe API could
be called from an EXPECT script, to solve
the issue of hardcoded passwords. For
additional security, the API is also
available as a DLL.
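As a hedged illustration of the Expect integration, the sketch below retrieves a managed password at run time instead of hardcoding it. The article names neither ForestSafe's console executable nor its arguments, so "fsgetpass" and its flags are hypothetical:

#!/usr/bin/expect -f
# "fsgetpass" is a hypothetical name for the ForestSafe console application;
# its -host/-account flags are illustrative only
set host [lindex $argv 0]
set password [exec fsgetpass -host $host -account root]

# Use the retrieved password to open an SSH session without hardcoding it
spawn ssh root@$host
expect "assword:"
send "$password\r"
interact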
High Availability
The system will run on a single Windows
2003 Server, or can be deployed over 6
servers in a redundant and highly
available configuration.
Summary
ForestSafe provides enhanced security
and productivity by allowing
administrators to have the correct access
to perform their jobs. Enhanced access is
only allowed for the duration required.
This means that your business can show
adherence to security and compliance
standards while maintaining an efficient
cost conscious business process. In
addition if you are considering deploying
an Enterprise Management system such
as IBM’s TADDM, and don’t want the
overhead of managing agents across your
infrastructure, ForestSafe can also be
integrated to allow it to manage the
credentials for you.
About the author: Paul Hawkins is a
Senior Consultant with Orb Data, having
joined the company during 2007.
In addition to ForestSafe Paul specialises
in IBM’s security portfolio.
For further information contact:
Did you miss a previous issue?

Issue 1
• Companies Spend to Solve the Identity Conundrum

Issue 2
• Managing Distribution Operations

Issue 3
• Creating an Inventory Schedule in 30 minutes

Issue 4
• An Introduction to Federated Identity Management

Issue 5
• Omnibus or TEC? What you need to know ...
• News in a Minute
• Composite Application Management
• Using Odyssey to Deliver with ITM 6
• IBM Tivoli Monitoring as a Business Services Viewer
• Building Data Flows with IBM Tivoli Directory Integrator
• Using ITM6x Firewall Gateway Feature

Issue 6
• Application Logfile Monitoring
• Bare Metal Installs
• Orb Data ITM Productivity Pack
• Implementing ITIMx
• News in a Minute
• Managing and Monitoring MQ with ITM 6.x
• Service Views without the cost
• Software Rental Options Slowly Becoming a Reality
• Odyssey 3.2 - ITM 6 Management
• New Training Centre
• Technical Corner - Running remote commands with ITM 6

Issue 7
• Leveraging Real-Time and Historical Data
• TW 8.4 New Features
• Bare Metal Installs Part 2
• Potty Training?
• Enterprise Level Network Monitoring for SME Pricing
• Tivoli Education
• Tivoli Common Reporting and BIRT
• Visualise Your Business Services with TBSM
• News in a Minute
• Orb Data Opens for Business in Scotland
• TPMfOSD Driver Injection

Issue 8
• 6 Reasons to Upgrade to Netcool/OMNIbus
• An Introduction to TADDM
• Netcool OMNIbus 7.2 - Accelerated Event Notification
• IBM Tivoli Asset Management for IT
• Based in the Euro Zone?
• IBM Enterprise Single Sign on Solution
• Agentless Monitoring for ITM 6.2.1
• An Introduction to Dynamic Thresholds
• News in a Minute
• Integrating TBSM and ITM 6 using BSM_Identity

Don't worry, you can download them here: http://www.orb-data.com/messagebroker
Netcool Impact – Escalating Events based on Business Criticality (continued from back page)
- Make sure the Enabled checkbox is selected
- In the Table Description section, select cmdb from the Base Label drop down box
- Select Department from the drop down box next to it
- Click Refresh, this should bring back the table fields
- Make DeptName the key field
- Select DeptName as the Display Name Field
- Click the Save icon (floppy disk) and then close the tab
Create a Dynamic Link
This will establish a relationship between
the Device and Department information.
In this example, Device Facility and
Department Location are related, i.e. if a
device at a facility fails, a department at
the same location will be impacted.
- Drop down the Data Sources And Types menu
- Click the Devices Data Type (the actual word Devices)
- Select the Dynamic Links tab
- Click the New Link by Filter icon
- Select Department as the Target Data Type
- Enter a filter of Location = '%Facility%' (see the note after this list)
- Click OK
- Click the Save icon and then close the tab
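As a rough illustration, the %Facility% token is replaced at run time with the Facility value of the source Device data item, so for server1 (located in Crewe, per the Device table populated at the start of this article) the link resolves to the equivalent of:

select * from Department where Location = 'Crewe';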
To test the Link
- Drop down the Data Sources And Types menu
- Click the View Data Items icon for Devices
- Click the View Linked Data icon for one of the servers
- This should bring up a window showing the associated Departments
- Close the tab
Create the Policy
For this example, the Operations
department is considered to be critical to
the business. If any devices located at
the same facility fail, we want to
automatically increase the severity of
the associated alert.
To do so the policy must first determine
the facility of the device by querying the
Device data source. Then it must find the
departments at that location, this time by
using the dynamic link.
Finally, each department at the location
is checked. If the Operations department
is impacted, the severity of the alert
is increased.
- Drop down the Policies menu
- Select the Custom template
- Click the + icon
- Name the policy orbEventEnrichment and
click Save
I found trying to write code within the
browser was a challenge, especially when
it came to indentation, so instead I wrote
the code in vi and then copied and pasted it in. Expanding tabs in vi is a good plan.
My .exrc file looks like this:
set expandtab
set shiftwidth=4
set softtabstop=4
set tabstop=4
set nohlsearch
The policy code:

/*
Policy: orbEventEnrichment
Author: Ant Mico
Date  : February 2009
Desc  : Sample policy demonstrating some key Impact policy functionality.
        Loosely based on the example given in the Solution Guide
        (with the errors removed!).
*/

// Set up some variables that are used by the logging function
policyName = "orbEventEnrichment";
debugLevel = 1;

// Log a start-up message.
// Note the use of a library policy which contains regularly used functions.
// This is a normal policy; just fully qualify the function to access it.
orbFunctionLibrary.orbLogger(debugLevel, policyName, "START");

// Query the Devices Data Type.
// We assume that the Node field in the Omnibus alert correlates with the Hostname.
dataType = "Devices";
filter = "Hostname = '" + @Node + "'";
countOnly = False;

// GetByFilter will return an array with the matching Data Items
devices = GetByFilter(dataType, filter, countOnly);

// The Length() function returns the number of elements in the array
If ( Length(devices) < 1 )
{
    orbFunctionLibrary.orbLogger(debugLevel, policyName, "No devices found.");
}
Else
{
    index = 0;
    While ( index < Length(devices) )
    {
        msg = "Device " + devices[index].Hostname + " is in the " + devices[index].Facility + " facility.";
        orbFunctionLibrary.orbLogger(debugLevel, policyName, msg);
        index = index + 1;
    }

    // Now we can use the link to get the impacted Departments.
    // Create an array in which the target DataType is stored.
    dataTypes = { "Department" };

    // Set the filter and maximum rows to return
    filter = NULL;
    maxToReturn = 10000;

    departments = GetByLinks(dataTypes, filter, maxToReturn, devices);

    If ( Length(departments) < 1 )
    {
        orbFunctionLibrary.orbLogger(debugLevel, policyName, "No departments found.");
    }
    Else
    {
        index = 0;
        While ( index < Length(departments) )
        {
            // Store the array element in a separate variable to
            // make accessing it easier
            dept = departments[index];
            msg = "Department " + dept.DeptName + " is impacted.";
            orbFunctionLibrary.orbLogger(debugLevel, policyName, msg);

            // Check to see if it is the Operations department
            If ( dept.DeptName == "Operations" )
            {
                // It is, so set the event Severity field to 5 (Critical).
                // Note that the @ syntax is shorthand for the EventContainer
                // variable which contains the event under consideration.
                // The other way to access it would be EventContainer.Severity.
                @Severity = 5;

                // This next bit is important. Once the event has been
                // changed we must return it so that it gets updated in
                // Omnibus. Note the use of the EventContainer variable.
                ReturnEvent(EventContainer);
            }
            index = index + 1;
        }
    }
}

orbFunctionLibrary.orbLogger(debugLevel, policyName, "FINISH");
The function library:
/*
Policy: orbFunctionLibrary
Author: Ant Mico
Date  : February 2009
Desc  : Group of functions that may be useful to other policies
*/

/*
Function to implement standard policy logging.
Takes three arguments:
- debugLevel, an Integer 0 (off) or 1 (on)
- policyName, a String containing the name of the policy
- message, a String containing the message to log
debugLevel could be extended to include more verbose
information like CurrentContext() etc.
*/
Function orbLogger(debugLevel, policyName, message)
{
    debugLevel = Int(debugLevel);
    If ( debugLevel > 0 )
    {
        Log(LocalTime(getDate()) + " " + policyName + " : " + message);
    }
}
Create the EventReader service
This is the service that will read events from the ObjectServer.
- Drop down the Services menu
- Select OmnibusEventReader from the menu
- Click the + icon
- Enter the service name as orbOmnibusEventReader
- Change the Data Source to be NCOMS
- Select the Event Mapping tab
- Click the New Mapping button
- Enter the Filter Expression "Node LIKE '^server[0-9]+$'"; only alerts that match this filter will trigger the policy
- Select the orbEventEnrichment policy as the policy to run
- Check the Active checkbox
- Click OK
- Click OK
- The service should appear in the Service Status window at the bottom left of the page
Start the orbOmnibusEventReader service
- Click the start button
- Click the log button to see what it is doing
- You should see the EventReader periodically querying the ObjectServer for alerts
Generate test events
Open up an Event List.
Using postzmsg or equivalent (I used
postzmsg from the TEC 3.9 FP6 non-TME
bundle), generate some test events:
postzmsg -f $OMNIHOME/bin/tec.cfg -r WARNING -m "Hardware failure detected" origin=server1 Device_Down TEC

postzmsg -f $OMNIHOME/bin/tec.cfg -r WARNING -m "Hardware failure detected" origin=server2 Device_Down TEC2

postzmsg -f $OMNIHOME/bin/tec.cfg -r WARNING -m "Hardware failure detected" origin=server3 Device_Down TEC3
You should see the alert from server3
being escalated to a Critical severity.
The log for the PolicyLogger service
should contain the following types
of entries:
Parser log: 2009-02-19
17:10:08.000 orbEventEnrichment
: Device server1 is in the
Crewe facility.
Parser log: 2009-02-19
17:10:08.000 orbEventEnrichment
: Department HR is impacted.
Parser log: 2009-02-19
17:10:08.000 orbEventEnrichment
: Department Facilities is
impacted.
Parser log: 2009-02-19
17:10:08.000 orbEventEnrichment
: FINISH
Parser log: 2009-02-19
17:10:57.000 orbEventEnrichment
: START
Parser log: 2009-02-19
17:10:57.000 orbEventEnrichment
: Device server2 is in the
Nantwich facility.
Parser log: 2009-02-19
17:10:57.000 orbEventEnrichment
: Department Engineering is
impacted.
Parser log: 2009-02-19
17:10:57.000 orbEventEnrichment
: FINISH
Parser log: 2009-02-19
17:11:21.000 orbEventEnrichment
: START
Parser log: 2009-02-19
17:11:21.000 orbEventEnrichment
: Device server3 is in the
Winsford facility.
Parser log: 2009-02-19
17:11:21.000 orbEventEnrichment
: Department Operations is
impacted.
MP.returnEvent did eri.putEvent
for EventContainer:
(OwnerUID=65534, Class=6601,
Service=, Serial=430,
RemoteSecObj=, TECFQHostname=,
LocalNodeAlias=, TaskList=0,
TECEventHandle=, PhysicalPort=0,
NmosEntityId=0, LocalPriObj=,
NmosObjInst=0, TECDate=,
LocalRootObj=, EventId=,
Flash=0, ProcessReq=0,
TECHostname=, RemoteRootObj=,
ExpireTime=0, SuppressEscl=0,
ReceivedWhileImpactDown=0,
InternalLast=1235063480,
Grade=1, TECStatus=,
Node=server3, RemoteNodeAlias=,
RemotePriObj=, TECServerHandle=,
Severity=5, ExtendedAttr=,
StateChange=1235063480,
KeyField=430, Acknowledged=0,
NmosManagedStatus=0,
FirstOccurrence=1235063480,
ServerName=NCOMS_A, URL=,
Poll=0, PhysicalCard=,
NmosSerial=,
Identifier=:TEC3:Device_Down,
OwnerGID=0,
LastOccurrence=1235063480,
X733ProbableCause=0, Agent=TEC3,
AlertGroup=Device_Down,
PhysicalSlot=0, NmosDomainName=,
NmosCauseType=0,
Summary=Hardware failure
detected, Tally=1,
TECRepeatCount=0,
NodeAlias=server3, Location=,
Type=1, LocalSecObj=,
X733SpecificProb=,
Manager=tivoli_eif probe on
carl, X733EventType=0,
Customer=, AlertKey=TEC3,
EventReaderName=orbOmnibusEventR
eader, ServerSerial=430,
X733CorrNotif=,
TECDateReception=)
Parser log: 2009-02-19
17:11:21.000 orbEventEnrichment
: FINISH
About the author: Anthony Mico is a Senior Consultant with Orb Data and has been working with systems management software for over 5 years. Anthony specialises in Availability products, including Tivoli Monitoring. He holds a number of Availability related Tivoli certifications. Email: [email protected]
News in a Minute

Google takes aim at Microsoft's Windows
After years of denial, Google has announced the Chrome Operating System.

The Chrome OS, Google says, is aimed at improving usability and security, both perceived weak points for Microsoft. The search giant said in a blog post that "Google Chrome OS is an open source, lightweight operating system that will initially be targeted at netbooks."
There is some good news for Microsoft, however. By announcing Chrome a year in advance, Microsoft can make use of this time to get its own new offering, Gazelle, right. Addressing Google's Chrome OS, Microsoft chief executive Steve Ballmer said that the move leaves its rival with dual operating systems, something Microsoft learned the hard way is not a good idea.
"Who knows what this thing is?" Ballmersaid during a talk at the company'sWorldwide Partner Conference in NewOrleans, which was broadcast over theweb. "To me the Chrome OS thing ishighly interesting — it won't happen for ayear and a half and they alreadyannounced an operating system(Android)."
He rejected the idea that Microsoftneeded to mimic Google's approach.
"We don't need a new operatingsystem," Ballmer said. "What we do need
to do is to continue to evolve Windows,Windows Applications, IE, the way IEworks in totality with Windows, and howwe build applications like Office... and weneed to make sure we can bring ourcustomers and partners with us."
Ballmer said research data shows that atleast 50 percent of the time people areusing their PC, they are doing somethingthat is not in the browser. "Windows isthe operating system for the job."
China web users 'outnumber US population'
The number of Internet users in China is now greater than the entire population of the United States, after rising to 338 million by the end of June, the official Xinhua news agency reported.

China's online population rose by 40 million in the first six months of 2009. The number of broadband Internet connections rose by 10 million to 93.5 million in the first half of the year, the report said.
However the growing strength and influence of the web population has prompted concern in Beijing about potential social unrest, and the government has stepped up its control over the Internet in recent years.

After rioting early this month in the capital of the restive northwest Xinjiang region, the government cut off online access to most of the area, in one of the largest known Internet blackouts in China yet.

It has also blocked access to Twitter, Facebook, YouTube and a range of other sites used for networking and sharing content.
Artificial brain '10 years away'
An artificial human brain could be built within the next 10 years, claims Henry Markram, director of the Blue Brain Project.

The Swiss project was launched in 2005 and aims to reverse engineer the mammalian brain from laboratory data. Over the last 15 years, the team have picked apart the structure of the neocortical column.

"It's a bit like cataloguing a bit of the rainforest," he said.

The project now has a software model of "tens of thousands" of neurons which has allowed them to digitally construct an artificial neocortical column. The team feeds the models and a few algorithms into an IBM Blue Gene machine with 10,000 processors.

Simulations have started to give the researchers clues about how the brain works. For example, they can show the brain a picture and follow the electrical activity in the machine. Ultimately, the aim would be to extract that representation and project it so that researchers could see directly how a brain perceives the world.
Published by Orb Data Limited, The Chapel, Grenville Court, Britwell Road, Burnham, Bucks, SL1 8DF. Telephone: +44 (0) 1628 550450. www.orb-data.com
IBM and Tivoli are trademarks of International Business Machines Corporation in the United States, other countries, or both.
Netcool Impact – Escalating Events based on Business Criticality
Ant Mico

IBM Tivoli Netcool/Impact provides a common platform for data access that circumvents organizational boundaries. It enhances OMNIbus solutions by allowing data from virtually any source to be correlated, calculated, enriched, delivered, notified, escalated and visualised, and by performing a wide range of automated actions.
This technical article demonstrates how
information held in a MySQL database
can be used to automatically escalate an alert based on its relative criticality to the business.
Technical Details
The database that is used in this example is cmdb, and the ObjectServer data source connection assumes the name NCOMS; however, you may need to change these based on your environment.
MySQL Installation
As root, install MySQL from the CentOS DVD using yum:
mount /dev/cdrom /media/cdrom
rpm --import /media/cdrom/RPM-GPG-KEY-CentOS-5
yum --disablerepo=\* --enablerepo=c5-media install php-mysql mysql mysql-server
Optional step to autostart mysql:
chkconfig --levels 35 mysqld on
Start mysql:
service mysqld start
Either as root or the netcool user, log in to mysql:
mysql -u root
Now create the database, tables and data that will be used later:
mysql> create database cmdb;
mysql> use cmdb;
mysql> create table Device ( Hostname VARCHAR(255), Facility VARCHAR(255) );
mysql> create table Department ( DeptName VARCHAR(255), Location VARCHAR(255) );
mysql> show tables;
mysql> insert into Device values ( 'server1', 'Crewe' );
mysql> insert into Device values ( 'server2', 'Nantwich' );
mysql> insert into Device values ( 'server3', 'Winsford' );
mysql> insert into Device values ( 'server4', 'Winsford' );
mysql> insert into Device values ( 'server5', 'Crewe' );
mysql> insert into Device values ( 'server6', 'Northwich' );
mysql> insert into Department values ( 'Engineering', 'Nantwich' );
mysql> insert into Department values ( 'HR', 'Crewe' );
mysql> insert into Department values ( 'Operations', 'Winsford' );
mysql> insert into Department values ( 'Catering', 'Northwich' );
mysql> insert into Department values ( 'Facilities', 'Crewe' );
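A quick sanity check of the data is worthwhile at this point. The join below is purely illustrative (Impact does not use it); it previews the Facility-to-Location relationship that the dynamic link exploits later in the article:

mysql> select d.Hostname, d.Facility, p.DeptName
    ->   from Device d
    ->   join Department p on p.Location = d.Facility;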
Create a new project
- Log into Impact as the admin user
- Select the NCI server instance if you need to
- Select the Projects tab
- Click the New Projects + icon
- Enter a project name (orbEventEnrichment)
- Click OK
Create the Event Source (the Data Source from which we take alerts)
- Make sure the orbEventEnrichment project is selected in the Projects drop down box
- Drop down the Data Sources And Types menu
- Select ObjectServer
- Click the + icon
- Enter the Data Source Name as NCOMS
- Enter the Username as root
- Disable Backup
- Enter the local hostname for the Primary Source
- Click the Test Connection button to make sure everything is OK
- Click OK
Create the Data Source to access the cmdb
- Drop down the Data Sources And Types menu
- Select MySQL
- Click the + icon
- Enter the Data Source Name as CMDB
- Enter the Username as root
- Disable Backup
- Enter the Database as cmdb
- Click the Test Connection button to make sure everything is OK
- Click OK
Create the Device Data Type
- Drop down the Data Sources And Types menu
- Click the + icon next to CMDB
- Enter Devices as the Data Type Name
- Make sure CMDB is selected as the Data Source Name
- Make sure the Enabled checkbox is selected
- In the Table Description section, select cmdb from the Base Label drop down box
- Select Device from the drop down box next to it
- Click Refresh, this should bring back the table fields
- Make Hostname the key field
- Select Hostname as the Display Name Field
- Click the Save icon (floppy disk) and then close the tab
You should end up with something like the
picture on the top of the next page:
Create the Department Data Type
- Drop down the Data Sources And Types menu
- Click the + icon next to CMDB
- Enter Department as the Data Type Name
- Make sure CMDB is selected as the Data Source Name