
Data Center and Information Security
Securely Managing Internet-Connected Data Centers

Preface

The world we live in today increasingly relies on computers and the Internet to function. The information technology (IT) infrastructure needed to support it must run non-stop to enable trade between companies, communication between people, and dissemination of information.

While the term data center may still evoke the image of an immaculate room housing a set of large computers processing information, today's data centers are neither physically nor logically isolated from the rest of the world.

The value of information is directly related to how quickly and easily the user can access it. The Internet's ability to disseminate information provides tremendous benefits, but the tradeoff for this increased agility is the risk of compromising security. Information security is the discipline that seeks to maintain information availability, confidentiality and integrity. It has become one of the most critical areas in IT.

As the leading vendor of products for next-generation IT infrastructure management, Cyclades constantly strives to offer products that can support your organization's security policies. Information security is so important to us that it has become an integral part of our product design and company vision.

We have produced this booklet to help support and educate our partners and customers on the methods and procedures used to maintain security in data center management. This booklet follows the highly successful “Console Management in the Data Center” in its format, use of accessible language, and pragmatic approach.

We hope this volume in the Cyclades Technical Booklet Series will be useful to you.

Marcio Saito
Chief Technology Officer
Cyclades Corporation

© 2005 Cyclades Corporation, all rights reserved. The following are registered or registration-pending trademarks of Cyclades Corporation: Cyclades, AlterPath. All trademarks, trade names, logos and service marks referenced herein, even when not specifically marked as such, belong to their respective companies and are not to be considered unprotected by law.

Information security and Cyclades

Data center management is evolving to encompass a growing number of managed IT assets in heterogeneous environments dispersed over multiple locations. At the same time, IT budget limitations require the consolidation and more efficient use of support resources. As the ability to remotely access and manage equipment has become a top priority, information security has correspondingly become a major concern.

This booklet provides general concepts and objective information on data center security without focusing on specific products. If you wish to know more about Cyclades solutions, please visit www.cyclades.com or request a product catalog from your local Cyclades representative.

Cyclades Corporation offers a comprehensive system of products that reduce operational costs and risks while increasing the productivity of IT assets and personnel. The Cyclades AlterPath™ System of out-of-band infrastructure (OOBI) products provides secure, alternate paths into the production IT infrastructure, enabling administrators to remotely access, diagnose and restore disconnected assets to normal operation. Designed to integrate and deploy seamlessly into the enterprise, Cyclades AlterPath solutions include console servers, KVM and KVM over IP switches, power control appliances, service processor managers, blade managers and a manager to control the entire out-of-band infrastructure.

More than 8,000 organizations, including 85 percent of the FORTUNE 100, depend on Cyclades products. Founded in 1991 and headquartered in Fremont, California, Cyclades is a global company with 19 offices worldwide.

Booklet organization

This booklet is organized in short, objective sections to facilitate its use as a reference. The “Key Security Facts” boxes throughout the text summarize the main concepts explained in each section.

Chapter 1 focuses on general information and security concepts that should be of interest to anyone involved in the design, implementation, operation or support of a data center.

Chapter 2 explains the scope of use and functionality of the security-related protocols most relevant in data center management.

Chapter 3 focuses more specifically on security in data center management applications and discusses some of the topics that have been relevant in product selection and the deployment of an out-of-band infrastructure (OOBI).


Table of Contents

Information security and Cyclades
Booklet organization

Chapter 1: Information security
  Information security – What is it?
  Security – Can it be purchased in a box?
  Security policy – Why do I need it?
  Security audits
  Mechanics of protection – Layered security
  User authentication
  Data encryption
  Network filtering and firewalls
  Physical security
  Log analysis, intrusion detection and event notification
  Open source software and security

Chapter 2: Security-related protocols
  Server-based user authentication protocols: RADIUS and TACACS
  Network service authentication: Kerberos
  User authentication with directory services: LDAP
  Transport layer data encryption: SSL
  Network tunneling: IPSec
  Secure remote session: SSH

Chapter 3: Security and data center management
  Management and security – SSH is not enough
  Consistent security policies
  Security features
  Console servers as SSH access appliances for router management
  IPMI, iLO, blade management modules
  The impact of new regulations in out-of-band infrastructure administration

Resources and contact information
References
Feedback and Cyclades support


Chapter 1: Information security

This chapter presents general concepts in information security and explains how a consistent security policy can be implemented in the data center.

Information security – What is it?

Information security is the discipline that deals with the following three attributes:

Confidentiality is the assurance that information is shared only among authorized persons or organizations.

Integrity is the assurance that the information is authentic, complete and can be relied upon to be sufficiently accurate for its purpose.

Availability is the assurance that the systems responsible for delivering, storing and processing information are accessible when needed by those who need them.

Loss of any of the above attributes in a corporate data network characterizes an information security breach.

In an economy where organizations are highly dependent on the immediate availability of reliable information, a security weakness frequently threatens the continued existence of the corporate entity. It is imperative that IT professionals implement the technology and processes that mitigate that risk.

Security – Can it be purchased in a box?

Unfortunately, security cannot be purchased as a turnkey solution. Information security in the data center depends on policies, discipline, tools and technology.

Maintaining information security can be compared to keeping burglars out of a home: reliable door locks (good components) are not effective if people forget to lock them properly before leaving the house (bad process). Locking and checking all doors before leaving on vacation will not prevent someone from breaking in if one of the windows has a flimsy locking mechanism (a single weak component can compromise an otherwise sound security system).

In order to maximize information security, we need to create a good architectural design, document and follow the guidelines, procedures and rules that govern the data center (i.e., implement a security policy), and select products and systems that include the features necessary to support those policies and architecture.

Key Security Facts
Information security has the objective of preserving the confidentiality, integrity and availability of information in an organization, and is achieved by combining good components, good architectural design, and good practices in the data center.


Security policy – Why do I need it?

Defining a security policy may be a legal requirement in some industries. Additionally, it shows an organized commitment to information security, fostering a strong reputation and trust among business partners. But the main reason to define a policy is that it plays an important role in protecting your information assets, which is strategic to the survival of the organization. There are four key elements of this policy:

The Philosophy is the approach towards information security, the guiding principles of the information security strategy. The security philosophy is a big umbrella under which all other security mechanisms should fall.

The Strategy is a measurable plan detailing how the organization intends to achieve the objectives that are laid out, either implicitly or explicitly, within the framework of the philosophy.

Rules define the dos and don'ts of information security, again within the framework of the philosophy.

Practices define the “how” of the organization's policy. They are a practical guide regarding what to do and how to do it.

Implementing a security policy requires creating a supportive environment and promoting user education. The creation of a policy is normally driven by the power structures of the organization, motivated by a clear vision of the strategic importance of information security to the success of the enterprise. If that is not initially the case, changes to the structures and culture may be necessary, since an effective security policy must be implemented throughout the organization. Keeping these elements in mind, we can now explore the practical implementation of a security policy.

Security audits

After devoting the effort to define good policies and implement consistent security processes and infrastructure, many organizations are lulled into a false sense of confidence.

Effective security is a dynamic process that needs continuous maintenance and improvement. How do you know for sure that security has not been breached, or that the policies are being followed? Are all software patches up to date? The only way to know is to conduct periodic security audits.

Just as financial audits are usually performed by independent external firms, security audits are better conducted by a team that is independent from the people in charge of implementing the security policies.

Key Security Facts
In practical terms, a security policy is a published set of documents laying out the organization's philosophy, strategy, rules and practices with regard to information security. Implementing security software or hardware without a consistent policy in place is one of the most common mistakes in data center design.


Many existing security policy weaknesses can be assessed through careful and complete interviews and analysis by a knowledgeable auditor, who documents whether a company is in compliance with its stated policy. Recommendations arising from the auditing process will help to enhance an existing security policy and its implementation.

Special attention is paid during audits to ensure compliance with various laws and government regulations related to privacy and confidentiality.

A growing number of tools are becoming available to automate part of the auditing process, especially in the security assessment of the infrastructure, where they automatically detect software-related vulnerabilities. Automated vulnerability scanning and patch management tools do not replace auditors, but they can complement and enhance a formal auditing process.

Mechanics of protection – Layered security

A secure data center design must start with the assumption that no piece of software or hardware can be trusted to be 100 percent secure. But, if that assumption is true, how do we design a secure system?

A good general security principle is “defense in depth.” Avoid relying on a single protection method and deploy security in layers designed so that a hacker has to defeat multiple defense mechanisms before completing a successful attack.

For example, packet filtering that blocks access from network addresses outside the organization keeps attackers out even if they have stolen access passwords. Data encryption can protect information confidentiality even when someone breaks physical security and taps into communication wires.

Different versions of the same defense mechanism can also be layered. For example, an attacker can neutralize a firewall by exploiting a known design flaw. Deploying two firewalls in series using distinct technologies makes it more difficult for someone to penetrate the network.

User authentication may require not only a user name and password, but also the presentation of a token card and/or a biometric pattern (such as a fingerprint scan). Stealing a password may be easy, but obtaining it and a physical device at the same time is far more difficult.

A defense mechanism that is layered on top of another does not necessarily have to be unbreakable. Sometimes, all that is required is to delay the attack and buy enough time so that the intrusion can be detected and diverted by other means.

Key Security Facts
User authentication, data encryption, network traffic control, physical access control, logging and auditing are some of the mechanisms available to preserve information security. Because no mechanism or device alone can maintain information integrity, it is important to deploy the security apparatus in a layered manner.


User authentication

User authentication mechanisms are designed to uniquely identify users, assign their corresponding access rights to information, and track their activities. These measures also show users that security is taken seriously by the organization.

User IDs and strong passwords are the primary means of safeguarding organizational assets. Authentication is usually performed at the access point and consists of challenging the user to provide access keys (passwords, biometric information, tokens, ID cards, etc.) and checking their access privileges against an authentication server (using authentication protocols such as RADIUS or LDAP).

Before authentication servers existed, user names and passwords had to be stored locally at every network access point. That created enormous management overhead because any change to the user database had to be manually replicated over a potentially large number of devices. It was also a source of security problems because password databases were less protected at the edge of the network, and human mistakes (by systems administrators updating files manually) were common.

Authentication by user name/password is an example of single-factor authentication and can provide adequate protection for most enterprise applications if passwords are managed properly.

When stronger authentication security is required, two-factor authentication can be implemented with the addition of tokens (e.g., RSA SecurID®). A two-factor authentication scheme requires users to present “something you know” (password) and “something you have” (the token). One must possess both at the same time to gain access to the network.

For ultimate authentication security, a biometric scan match can be required (fingerprint, retina, face recognition, etc.) as a third authentication factor. With biometric authentication, an intruder could obtain a stolen password and token card, but still be unable to gain authorized access without a biometric match.
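To make the idea concrete, the sketch below shows how an access point might verify two factors in software: a salted, iterated password hash (“something you know”) and a time-based one-time code produced by a token (“something you have”). It is a minimal illustration using only the Python standard library; the secrets, iteration count and 30-second time step are assumptions, not a description of any particular product.

```python
import hashlib, hmac, os, struct, time

def hash_password(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # Factor 1: store only a salted, iterated hash, never the clear-text password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    # Factor 2: time-based one-time code (HOTP/TOTP-style truncation, RFC 4226/6238).
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Enrollment data (illustrative values only).
salt, token_secret = os.urandom(16), os.urandom(20)
stored_hash = hash_password("correct horse battery staple", salt)

def authenticate(password: str, submitted_code: str) -> bool:
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = hmac.compare_digest(submitted_code, totp(token_secret, int(time.time())))
    return knows and has  # both factors must be presented at the same time
```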

Key Security Facts
User authentication does not have to be limited to user names and passwords. Authentication can be enhanced with the adoption of two- and three-factor authentication schemes using “something you know” (a password), “something you have” (a token card), and “something you are” (a biometric scan match).


Data encryption

Data encryption is the process of encoding data (through a series of mathematical functions) to prevent unauthorized parties from viewing or modifying it. Its objective is to protect the confidentiality and integrity of the information when the encrypted data is in transit (such as over the Internet).

Use of data encryption has been reported as early as 1900 B.C. in ancient Egypt. People have been using codes of various complexity ever since to disguise all kinds of messages. Cryptography (the science of encryption) became a serious issue when the telegraph was invented, and it was further developed during World War II, when digital computers were invented to crack codes.

Data encryption works so that only the recipient can decipher the data, using the decoding algorithm and an encryption key that is known only to that person. The encryption algorithm itself may or may not be secret.

Someone intercepting encrypted data cannot easily reverse the algorithm and retrieve the data. Data encryption not only protects data confidentiality, but can also be used to protect data integrity (the receiving party will know if the cipher data has been tampered with) and to certify the origin of the message being transmitted.

Until the 1960s, the right to create and break codes was thought to belong to the government, but in the 1970s the U.S. National Bureau of Standards (later NIST) selected the Data Encryption Standard (DES) algorithm to serve as a common encryption standard, enabling the development of commercial applications. In 1977, RSA, an alternative to DES, was introduced as a “public key” encryption standard. Improving on DES's single-key structure, RSA provided a two-key Public Key Infrastructure (PKI). A user generates both of these keys; one of them – the “public key” – is distributed openly, like a phone number, posted to an Internet public key server. Anyone can use this public key to send encrypted e-mail to the key's owner, who then uses his or her second “private key” to decrypt the message.
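As a concrete illustration of the public/private key model described above, the sketch below generates an RSA key pair and encrypts a short message with the public key so that only the holder of the private key can recover it. It assumes the third-party Python cryptography package is installed; the key size, padding choices and message are illustrative only.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates the key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can produce ciphertext...
ciphertext = public_key.encrypt(b"router enable password: s3cr3t", oaep)

# ...but only the private-key holder can read it.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"router enable password: s3cr3t"
```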

Key Security Facts
Relying on the algorithm's secrecy to guarantee the encryption scheme's effectiveness (security by obscurity) is usually a weak security practice. Data encryption effectiveness should depend solely on the secrecy of the encryption keys. Data encryption is the art of hiding in plain sight.


Network filtering and firewalls

Network filtering happens at the network protocol level and can be performed on routers and firewalls by analyzing the headers of IP packets and allowing or denying forwarding based on source or destination address, protocol type, TCP port number, packet length, etc.

By blocking packets based on network address information and protocol type, network filters can prevent unauthorized access even before an unauthorized user tries to authenticate or a hacker attempts to launch an attack.
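The following sketch shows the kind of decision a packet filter makes, expressed in Python with the standard ipaddress module. The rule set (allow SSH and HTTPS only from an internal management subnet, deny everything else) is a made-up example of a policy, not a recommendation for any specific network.

```python
import ipaddress

# Hypothetical policy: management protocols only from the internal subnet.
ALLOWED_SOURCES = ipaddress.ip_network("10.10.0.0/16")
ALLOWED_TCP_PORTS = {22, 443}   # SSH, HTTPS

def filter_packet(src_ip: str, protocol: str, dst_port: int) -> bool:
    """Return True to forward the packet, False to drop it."""
    if protocol != "tcp":
        return False
    if ipaddress.ip_address(src_ip) not in ALLOWED_SOURCES:
        return False
    return dst_port in ALLOWED_TCP_PORTS

print(filter_packet("10.10.3.7", "tcp", 22))    # True  - internal host, SSH
print(filter_packet("203.0.113.9", "tcp", 22))  # False - outside address is blocked
print(filter_packet("10.10.3.7", "tcp", 23))    # False - Telnet denied even internally
```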

Firewalls are devices that enforce access policies between two networks by performing network packet filtering. In addition to looking at IP headers, most firewalls are also aware of the data payload and can test application type and message content for patterns of traffic to allow or deny access.

For example, firewalls can be configured to allow only e-mail traffic through them, thereby protecting the network against any attacks other than attacks against the e-mail service.

A firewall is also important as a single audit point. It provides important logging functions and can often provide summaries to the administrator about what kinds and levels of traffic passed through it, how many attempts there were to break into it, etc.

While firewalls are important components of a security system, they cannot maintain security alone. Firewalls need to be part of a comprehensive set of security policies and are only one layer of protection securing the perimeter of the network.

Firewalls cannot protect against attacks that do not go through them and are usually ineffective at protecting against attacks launched from within the network. Firewalls are also usually ineffective against viruses and attacks launched through a tunneled protocol.

Key Security Facts
Firewalls are indispensable tools for enforcing security policies at the perimeter of the network, but they cannot deliver on the promises of “security in a box.” Without proper policies in place and additional layers of security, firewalls may induce a false sense of security that puts the organization at risk.


Physical security

Physical access to the IT infrastructure is the most basic level of network security, but it is frequently forgotten. Many organizations are very concerned about proprietary data leaking out of the company through the network. Unfortunately for those concerned, a magnetic tape or a stolen hard drive can just as effectively be used to export data.

That is not to say that logical security measures are unimportant (they are important); it is simply vital to remember that such measures are only part of an overall security strategy, which should include physical as well as logical safeguards. Physical security is about limiting access to equipment for the purposes of preventing tampering, theft, human error, and the subsequent damage these actions cause.

Physical security measures may include placing servers and other associated equipment in a separate room, away from the prying eyes and wandering fingers of overcurious staff.

If servers cannot be secured in lockable racks, they should be password-protected. Removing keyboards and mice is also a reasonable option. A safer and more efficient approach is to have remote monitoring and remote notification in place.

While unauthorized access may be easy to manage through careful server room placement and adequate security measures, authorized access brings its own challenges, such as when visiting contractors need access to the server room. In a utopian environment, it would be nice to think that the server room contained nothing but computer equipment, but the reality is that there are likely to be telephone systems, wiring closets, air-conditioners, fire detection systems, and a host of other units, many of which will require outside contractors to maintain.

Physical access control using biometric authentication, video surveillance cameras, monitoring of visitors by a member of the IT staff, or even a good old-fashioned door lock may all be part of the solution.

Key Security Facts
The most overlooked means of compromising data confidentiality or disrupting IT operations is to physically take or destroy equipment. Before spending thousands of dollars securing your data against threats coming from the network, make sure to control physical access to servers and network infrastructure. A padlock may be your most effective information security investment.


Log analysis, intrusion detection and event notification

Logging, monitoring and event notification are facilities that do not obstruct intrusions, but promote early detection of security breaches, identify the source of an attack, and enable faster recovery of lost data. They can prevent an attack from succeeding and minimize the damage if a security breach occurs.

For example, software tools or network monitors can help by constantly monitoring network traffic, looking for suspicious patterns, or by watching for unusual changes in configuration or system files on a server. By providing early warning of what could be an attack, these tools can drastically minimize the damage caused by an intrusion.

A network intrusion detection system monitors packets on the network and searches for patterns indicating that an intruder is attempting to break into a system (or cause a denial of service attack). For example, a system could associate an unusually large number of TCP connection requests to many different ports on a target machine with an attempted TCP port scan. Such a system may run either on the target machine, watching its own traffic, or on an independent machine aggressively monitoring all network traffic.

A system integrity verifier monitors system files to detect changes (for example, a hacker or program modifying an operating system to create a back door). The Microsoft® Windows® registry and UNIX® crontab files are other examples of data to be monitored.

Log file monitors can be utilized to analyze the log files generated by network and IT devices and services, looking for patterns that indicate unusual activity (detecting a large number of failed logins as an attempt to break into a user account, for example).
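A log file monitor can be as simple as the sketch below, which scans an SSH authentication log for failed logins and flags source addresses that exceed a threshold. The log path, message format and threshold are assumptions for illustration; a production tool would follow the file continuously and feed an alerting system.

```python
import re
from collections import Counter

LOG_FILE = "/var/log/auth.log"          # assumed location of the SSH log
PATTERN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10                          # failed attempts before raising an alert

failures = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            failures[match.group(1)] += 1

for source, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {source} - possible brute-force attempt")
```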


Open source software and security

There has been much public debate on the security merits (or weaknesses) of open source software when compared to proprietary software. These debates have frequently been driven by emotional arguments triggered by the numerous high-visibility worms and viruses that have exploited vulnerabilities in desktop systems to disrupt networks.

Open source advocates have argued that open source software is inherently more secure because fixes for uncovered design flaws are quickly distributed and made available. They further argue that early and intense code review promotes the development of better quality code.

The counter-argument is that, while it may be true that bugs are fixed more quickly in open source, a collaborative, community-based development model is not a replacement for a sound system design.

Proprietary software advocates have argued that open source software is inherently less secure because hackers have access to the software intended to protect system data and can easily find and exploit existing design flaws.

While keeping secrets usually does not hurt security, secrecy of the source code or security methods (security by obscurity) provides weak protection and therefore does not support the argument in favor of proprietary software. Deploying a clearly visible, strong door lock can protect a house much better than hiding an unlocked door behind a bush and hoping that burglars cannot break in because they cannot find the door. An encryption method should protect the data even when its algorithm is well known.

As any security expert will attest, the security of a system is directly related to the quality of its intrinsic design and the procedures governing its operation.

So, with regard to supporting security policies, components of your IT infrastructure should be selected based on the soundness of their design, the commitment to security demonstrated by the vendor, and the functionality required to support those policies, not the software development model that produced them.

Key Security Facts
Open source software is not inherently more or less secure than proprietary code. The security of a software system results from the quality of its design, not the availability of the source code.


Chapter 2: Security-related protocols

This chapter provides a basic explanation of the scope of use and internal functionality of several security-related protocols commonly used to secure data center management, as well as suggestions for additional information resources.

Server-based user authentication protocols: RADIUS and TACACS

RADIUS was initially created by a company called Livingston and later became an Internet Engineering Task Force (IETF) standard defining an authentication protocol between network access points and an authentication server.

Before authentication servers existed, user names and passwords had to be stored locally at every network access point. That created enormous management overhead because any change to the user database had to be manually replicated over a potentially large number of devices. It was also a source of security problems because password databases were less protected at the edge of the network. Furthermore, mistakes by systems administrators updating files manually were common.

The objective of the RADIUS protocol is to centralize storage of the user database and passwords, which simplifies management, facilitates policy enforcement, and consequently enhances security. The equipment receiving the access request uses the RADIUS protocol to authenticate the user against the centralized authentication server.

TACACS (and its XTACACS and TACACS+ variants) is a protocol designed by Cisco Systems with similar functionality and objectives as RADIUS. It was originally meant to be a Cisco proprietary protocol, but support for TACACS has been added by many other network equipment vendors.

RADIUS has two software components: the client portion runs on the access equipment and the server portion runs on the authentication server. When users request a network connection, the RADIUS client requests the user name and password information. The client then initiates a login request transaction with the server, which checks the database and determines whether or not to allow access. The password carried in the transaction is obfuscated with a shared secret so that someone watching the traffic between client and server cannot easily retrieve the password information.
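For illustration, the sketch below reproduces the User-Password hiding scheme that RADIUS defines in RFC 2865: the password is padded to a multiple of 16 bytes and XORed, block by block, with an MD5 keystream derived from the shared secret and the request authenticator. It is a simplified teaching example under those assumptions, not a complete RADIUS client.

```python
import hashlib
import os

def hide_password(password: bytes, shared_secret: bytes, authenticator: bytes) -> bytes:
    """Obfuscate a User-Password attribute as described in RFC 2865, section 5.2."""
    padded = password + b"\x00" * (-len(password) % 16)   # pad to a 16-byte multiple
    result, previous = b"", authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(shared_secret + previous).digest()
        block = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        result += block
        previous = block          # each block is chained into the next keystream
    return result

# Illustrative values only: the secret is configured on both client and server;
# the authenticator is a random value carried in the Access-Request packet.
secret = b"example-shared-secret"
request_authenticator = os.urandom(16)
print(hide_password(b"s3cr3t", secret, request_authenticator).hex())
```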

Initially designed only for authentication against a centralized database, RADIUS and TACACS servers evolved to enable the enforcement of access policies (such as selective access to network resources by different users, different access privileges dependent on the authorization level or time of day, etc.) and to generate billing and auditing information (a record of the user activity).

Key Security Facts
Use of server-based authentication using LDAP, RADIUS and other protocols not only increases authentication security, but also helps to enforce access policies and provide logging and billing information that helps to determine the cause of network problems.


RADIUS server software can be found both in commercial and open source form. An example of open source server software that runs on most operating systems is the popular FreeRADIUS package, found at http://www.freeradius.org.

Network service authentication: Kerberos

Kerberos (named after the three-headed dog guarding the entrance to Hades in Greek mythology) is a network authentication protocol developed at the Massachusetts Institute of Technology (MIT) that became a standard implementation in UNIX systems and networking equipment. It is designed to provide strong authentication for client/server applications by using secret-key cryptography.

In a typical modern IT infrastructure, geographically dispersed users request different services from servers deployed in other locations. When a server receives a request from a user, it could trust the originating desktop machine to have properly authenticated the user before providing the service requested. That could be acceptable in a closely controlled environment, but is not reasonable in an open network.

If the other machines cannot be trusted, a server should require a separate authentication process where users are forced to prove their identity and privileges every time before accessing the service. This process is inconvenient and generates a large number of authentication transactions on the network, which can unnecessarily expose user names and passwords.

The objective of the Kerberos protocol is to minimize the exchange of user name and password information over the network when a user requests services from the network. This is accomplished by the implementation of a third-party authentication service.

Kerberos keeps a database of the clients and users and their respective secret keys and encrypted passwords. Both network services and the clients/users wishing to use those services have to register with the Kerberos server. Because it knows these private keys, Kerberos can generate messages that convince each party of the identity of the other party.

Kerberos also generates temporary private keys (or session keys, a.k.a. tickets) given only to the two parties participating in a transaction, which are used to encrypt messages between them.

More information on Kerberos can be found at http://web.mit.edu/kerberos/www/


User authentication with directory services: LDAP

A “directory” is like a database, but tends to contain more descriptive, attribute-based information. The information in a directory is generally read much more often than it is written. As a consequence, directories don't usually implement the complicated transaction or rollback schemes regular databases use for doing high-volume complex updates (making directories much faster for read access).

X.500 is the model for directory services defined by Open Systems Interconnection (OSI) (the same standard that defined the seven-layer network reference model). The model encompasses the overall name space and the protocol for querying and updating it. The protocol is known as DAP (Directory Access Protocol) and runs over the OSI network protocol stack, which makes it complex and “heavyweight.”

The Lightweight Directory Access Protocol (LDAP) was developed at the University of Michigan around 1992 to provide a simple protocol for access to directory services. Because it runs directly on top of TCP/IP and lacks some of the most esoteric functions of X.500, it is comparatively “lightweight” and easier to implement in small computer systems.

Over the years, LDAP has become a de facto standard for access to corporate directory services. Using LDAP directories for user authentication for network access enables further centralization of the user database compared to RADIUS, because it allows authentication directly against the enterprise directory.

LDAP was originally defined without encryption, meaning that the transaction over the network is not secure against sniffing. Secure LDAP (SLDAP) implements data encryption based on the Secure Sockets Layer (SSL) protocol and provides more secure authentication.

An example of an open source implementation of an LDAP server can be found at http://www.openldap.org. LDAP services have also been incorporated into most operating systems, including several versions of UNIX, Linux®, and Windows.
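In practice, authenticating a user against an LDAP directory is usually done by attempting a “bind” with the user's distinguished name and password. The sketch below assumes the third-party ldap3 Python package and a hypothetical directory server and DN layout; it is meant only to show the shape of the transaction.

```python
from ldap3 import Connection, Server

def ldap_authenticate(username: str, password: str) -> bool:
    # Hypothetical server and naming context; ldaps:// keeps credentials encrypted in transit.
    server = Server("ldaps://ldap.example.com")
    user_dn = f"uid={username},ou=people,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    try:
        return conn.bind()   # a successful bind means the directory accepted the credentials
    finally:
        conn.unbind()

if ldap_authenticate("jdoe", "correct-password"):
    print("access granted")
else:
    print("access denied")
```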

Key Security Facts
LDAP is becoming the de facto standard directory access protocol and is also being used for user authentication in the enterprise network. RADIUS and TACACS are the most prevalent authentication protocols in networks today. All of those protocols are important for supporting security policies today and in the future.


Transport layer data encryption: SSL

The SSL protocol was originally developed by Netscape and has been universally accepted to secure Internet transactions. SSL is the basis of the IETF's Transport Layer Security (TLS) standards.

The Transmission Control Protocol/Internet Protocol (TCP/IP) governs the transport and routing of data over the Internet. Other higher-level protocols (HTTP, IMAP, SMTP, etc.) use TCP/IP to support typical application tasks such as displaying Web pages or running e-mail servers.

The SSL protocol is inserted between TCP/IP and the higher-level protocols. It uses TCP/IP on behalf of the higher-level protocols, and in the process allows an SSL-enabled server to authenticate itself to an SSL-enabled client, permits the client to authenticate itself to the server, and thus enables both machines to establish an encrypted connection.

The SSL protocol supports the use of a variety of cryptographic algorithms, or ciphers, for operations such as authenticating the server and client to each other, transmitting certificates, and establishing session keys. The specific cipher to be used is negotiated between the SSL client and server during the session handshake. Examples of ciphers are the Data Encryption Standard (DES), used by the US government, Message-Digest algorithm 5 (MD5), RSA (a public key algorithm for both authentication and encryption), and triple-DES (DES applied three times). The most commonly used SSL cipher suites use RSA key exchange.

Because of the generic nature of its architecture, SSL can be used to secure sessions transported over the Internet for almost any application. Most UNIX systems rely on the OpenSSL library, which is used by several popular applications such as the Apache Web server and the OpenSSH remote session application.

However, SSL is not a transparent layer. It requires awareness and specific support from the higher-level protocol in order to provide its services. For example, the Apache Web server has to specifically incorporate SSL support in order to offer secure HTTPS Web connections.

Information on the open source implementation of SSL can be found at http://www.openssl.org.
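The sketch below illustrates the point that SSL/TLS sits between TCP/IP and the application: a plain TCP socket is wrapped in an encrypted session, the server's certificate is verified, and only then does the application protocol (here, a minimal HTTPS request) run on top. It uses the Python standard library ssl module; the host name is only an example.

```python
import socket
import ssl

context = ssl.create_default_context()   # verifies the server certificate against trusted CAs

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    # Wrap the plain TCP connection; the handshake negotiates ciphers and session keys.
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        print("negotiated protocol and cipher:", tls_sock.version(), tls_sock.cipher()[0])
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```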

Network tunneling: IPSec

IPSec is a set of IP extensions developed by the IETF to provide security services compatible with the existing IP standard (IPv4) and also the upcoming one (IPv6).

IPSec supports two encryption modes: Transport and Tunnel. Transport mode encrypts only the data portion (payload) of each packet, but leaves the header untouched. The more secure Tunnel mode encrypts both the header and the payload. On the receiving side, an IPSec-compliant device decrypts each packet.

The sending and receiving devices share a public key. The key exchange happens through a protocol known as the Internet Security Association and Key Management Protocol (ISAKMP), which allows the receiver to obtain a public key and authenticate the sender using digital certificates.


Unlike SSL (see the previous section), IPSec is implemented at the network level and is transparent to the applications running above the transport layer. By transporting traffic through an encrypted tunnel over a public network, IPSec enables the construction of a virtual private network (VPN) using a shared infrastructure.

A VPN gives users a secure point-to-point link through the Internet or other public or private networks without the expense of leased lines. It is a combination of tunneling, encryption, authentication, access control and auditing technologies/services used to transport traffic over an insecure network.

Secure remote session: SSH

The Secure Shell (SSH) is a protocol for secure remote login and other secure network services over an insecure network, standardized by the IETF secsh working group. It is a replacement for remote session programs such as Telnet and rlogin, providing a secure connection with strong authentication and encryption to protect not only the authentication process but also the session data.

It consists of three major components: the Transport Layer Protocol, the User Authentication Protocol, and the Connection Protocol.

The Transport Layer Protocol provides server authentication, confidentiality, and integrity. It may optionally also provide compression. It will typically be run over a TCP/IP connection, but might also be used on top of any other reliable data stream.

The User Authentication Protocol authenticates the client-side user to the server. It runs over the Transport Layer Protocol.

The Connection Protocol multiplexes the encrypted tunnel into several logical channels. It runs over the User Authentication Protocol.

The client sends a service request once a secure transport layer connection has been established. A second service request is sent after user authentication is complete. This allows new protocols to be defined and coexist with the protocols listed above.

The Connection Protocol provides channels that can be used for a wide range of purposes. Standard methods are provided for setting up secure interactive shell sessions and for forwarding (“tunneling”) arbitrary TCP/IP ports and X11 connections.

There are two versions of the SSH protocol: SSHv1 and SSHv2. They are very different protocols and do not interoperate. SSHv2 is significantly more secure, including the use of stronger ciphers for encryption. SSHv1 has structural weaknesses which leave it potentially open to man-in-the-middle and other attacks. Because of its weaknesses and limitations, SSHv1 is obsolete and should be avoided whenever possible. OpenSSH, the open source implementation of SSH developed mostly by OpenBSD developers, has become the most popular and widely adopted implementation of SSH on UNIX systems and can be found at http://www.openssh.org.
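As an illustration of an SSHv2 session established from a management script, the sketch below uses the third-party paramiko library to log in with a key pair and run a single command. The host name, user and key path are assumptions; in a real deployment the host key would already be present in a verified known-hosts file.

```python
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                                # prefer known, verified host keys
client.set_missing_host_key_policy(paramiko.RejectPolicy())   # refuse unknown hosts

client.connect(
    "console-server.example.com",   # hypothetical out-of-band access appliance
    username="admin",
    key_filename="/home/admin/.ssh/id_ed25519",
)

stdin, stdout, stderr = client.exec_command("show users")   # runs over the encrypted channel
print(stdout.read().decode())
client.close()
```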

Key Security Facts
SSH is a key protocol for remote access to the network infrastructure. Make sure to enable only the more secure SSHv2 version in your network, and ensure that the SSH protocols running in your systems are robust and up-to-date implementations.


Chapter 3: Security and data center management

This chapter focuses more specifically on security in data center management applications and discusses some of the topics that have been relevant in product selection and deployment of an out-of-band infrastructure (OOBI).

Management and security – SSH is not enough

Security has become a key parameter for selecting OOBI components, as the OOBI's ability to manage large numbers of network and IT devices dispersed over multiple locations becomes more important in minimizing data center downtime.

Capitalizing on that trend, some vendors market their products as secure based on the support for a few security-related features (for example, many console servers are called secure just because they support SSH connections). That is in contrast to the time when most remote administration was done with Telnet through an insecure terminal server over a local area network.

As discussed earlier, the support for an isolated feature does not define a secure product. To be secure, a product must incorporate a complete, robust, and consistent set of features needed to support security policies in the data center.

The software embedded into those products must be designed with security in mind; the code should be tested and audited for a sound security design. Patch management should be implemented so that newly uncovered vulnerabilities are fixed promptly.

Most importantly, the product must be configured and installed so that it is secure within your specific environment. An OOBI component vendor should be able to provide guidelines and support for installation of its products within a secure environment.

Consistent security policies

In a heterogeneous data center, there may be several different groups of people managing different aspects of the operation. One group might be responsible for managing network equipment (routers, switches, etc.), another group for servers (UNIX, Windows, etc.), and another group for the physical infrastructure (racks, power, cooling, etc.).

Frequently there are different levels of security requirements for those different groups. It is not uncommon to find environments where, for example, firewalls and software tools are deployed by the network group to prevent, detect and mitigate the effects of denial of service attacks that could affect service availability. At the same time, power control appliances providing power for the servers and KVM switches may be accessible through the network without any security protection.

Key Security Facts
Out-of-band access appliances provide remote access to the management ports or emergency service ports of servers and network equipment in the data center. If security is breached at that point, not only might data integrity be compromised but the entire data center might be threatened.


It is important for the entire OOBI (console servers, KVM and KVM over IP switches, power control appliances, service processor managers, blade managers and the OOBI manager) to be integrated under the same consistent security and management model. As discussed earlier, one isolated vulnerability or a single weak component is enough to compromise the security of the entire system.

These requirements for integration and consolidation become even more important as the data center evolves, with server consolidation, virtualization, and partitioning technologies blurring the boundaries between different systems and requiring the management of the IT infrastructure as a single, consistent system.

Security features

As mentioned earlier, when choosing OOBI components do not rely solely on a feature checklist. Remember that security-related features are necessary to implement security policies. The following list includes security features that should be considered.

User authentication: The OOBI components should support server-based authentication using the protocols utilized in your data center today and in the future (RADIUS, TACACS, Kerberos, NIS, LDAP, Secure LDAP, etc.). Support for two-factor authentication (e.g., RSA SecurID) and three-factor authentication (with support for biometric authentication) may also be important.

Packet filtering: While firewalls can effectively be used to enforce access policies based on IP addressing at the perimeter of the network, they are ineffective for filtering internal traffic. For management devices, you may want to impose more stringent access rules, and management products that can do internal IP filtering greatly increase security.

User and port access lists: Some products allow authenticated users to gain access to any management port. If you are in an environment where system administrators have different scopes of responsibility or privileges, you want to make sure the management product supports more sophisticated policies (certain users can access only certain ports, during certain times of the day, for specific operations, in read-only mode, etc.); a minimal sketch of such a policy check appears after this list.

Data and event logging: Keeping records of console data and events (Who accessed which port? What did they do?) can help to detect intrusions, prevent mistakes, and diagnose problems after they occur.

Data encryption: Both authentication transactions and data traffic must be encrypted when in transit over public or non-trusted networks. For example, using Telnet for console access exposes user name and password information in clear text to anyone connected to the physical LAN. Use of SSHv2 for text sessions, HTTPS for Web access, and VPN protocols such as IPSec can help mitigate that risk.
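The sketch below (referenced in the user and port access list item above) shows the kind of policy check such a product performs before connecting a user to a console port. The policy table, user names and port labels are invented for illustration.

```python
from datetime import datetime, time

# Hypothetical per-user access policy: which ports, at what hours, read-only or not.
ACCESS_POLICY = {
    "alice": {"ports": {"router-01", "router-02"}, "hours": (time(0, 0), time(23, 59)), "read_only": False},
    "bob":   {"ports": {"web-farm-07"},            "hours": (time(8, 0), time(18, 0)),  "read_only": True},
}

def authorize(user: str, port: str, wants_write: bool, now: datetime) -> bool:
    policy = ACCESS_POLICY.get(user)
    if policy is None or port not in policy["ports"]:
        return False
    start, end = policy["hours"]
    if not (start <= now.time() <= end):
        return False
    return not (wants_write and policy["read_only"])

now = datetime.now()
print(authorize("alice", "router-01", wants_write=True, now=now))   # allowed by policy
print(authorize("bob", "router-01", wants_write=False, now=now))    # denied: port not in bob's list
```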


Console servers as SSH access appliances for router management

With the increasing need for remote management, console servers have frequently been used as SSH access appliances to manage routers and other network equipment.

To configure low-level parameters in routers, a system administrator needs to interact with the embedded networking operating system using a command line interface. The problem is that many routers do not support SSHv2 for secure remote sessions.

That is the case with Cisco routers. They only support SSHv1, and even that feature is only available in some models with an expensive package upgrade. Using Telnet or SSHv1 over the network is not a secure way to manage the infrastructure.

One solution is to disable network-based Telnet access and use a console server to provide total or partial out-of-band access to the routers' console ports. This offers the following benefits:

Management access is more secure, using a robust SSHv2 implementation instead of weak Telnet or SSHv1.

There is only one SSHv2 access point to maintain and update, minimizing patch management and reducing vulnerabilities.

Deployment costs are low, since a console server is less expensive than the upgrade packages needed to support SSH on most routers.

IPMI, iLO, blade management modules

As awareness has increased, particularly of the operational and cost benefits inherent in the OOBI, vendors of high-end servers and other hardware equipment are starting to incorporate specific hardware technologies that provide out-of-band management functions.

Examples include Intelligent Platform Management Interface (IPMI) and Integrated Lights-Out (iLO) service processors and management modules in server blade systems.

IPMI is a proposed standard for communication with service processors (or baseboard management controllers, BMCs) to be embedded in every server motherboard. The BMC enables power control (the ability to turn the power off and on remotely over the network) and low-level hardware monitoring (temperature, fan speed, voltages on the bus, physical intrusion, etc.). iLO is a proprietary implementation of the same concept in Compaq ProLiant servers.

Newly announced server blades may be equipped with management modules that consolidate OOBI administration within the blade chassis, enabling the monitoring and management of all server blades as a single system.

Key Security Facts
Out-of-band access appliances add convenience and control over hardware devices, but frequently compromise security if not deployed using established security practices and procedures. Insecure Web access, local storage of passwords, and the lack of encryption support are the most common problems.


Those technologies can potentially be very useful and extend the scope and importance of OOBI administration. The problem is that they do not provide the security support needed for use in a secure data center. Either they cannot support server-based authentication (LDAP, RADIUS, etc.) and data encryption (SSHv2, HTTPS, etc.) at all, or they depend on external software packages that are usually vendor-specific and inadequate for the management requirements of a heterogeneous data center.

When taking advantage of the benefits provided by these new technologies, make sure to consider the security implications and, when necessary, deploy the appropriate security measures that can provide the access control, logging capabilities and consolidation required to maintain a secure environment.

The impact of new regulations in out-of-band infrastructure administration

Governments and legislators are starting to react to growing public concern about how companies handle and protect information and how security breaches can affect customers and consumers.

Several new laws and regulations require companies to comply with standards for data handling, data protection, event logging, unauthorized access disclosure, etc.

Companies in the pharmaceutical industry, for example, may be required to keep detailed records of events in the computer systems used to develop new drugs. Financial services companies are being required to better protect customer data privacy. Publicly traded companies may be required to adhere to a minimum set of information security measures due to public security concerns.

While regulatory compliance should not be the main driver for the design and implementation of a comprehensive security policy, regulations may affect it by adding new requirements for the data center infrastructure.

New company-wide policies might, for example, ban the transmission of user names and passwords in clear text, thereby prohibiting any OOBI product without SSH support. Centralization of user information may impose the need for LDAP-based authentication support in all networking equipment.

In order to track events in the network infrastructure, it may be necessary to keep a log of all console messages produced by servers and network equipment. This task would require the use of console servers and logging engines even in situations where OOBI administration was not previously justified.

Before starting the design or deployment of new OOBI systems, consult the teams managing security and handling regulatory compliance in your organization. There may be requirements that affect your product selection and architectural design.

Key Security Facts
Regulatory compliance should not be the main driver for the design and implementation of a comprehensive security policy, but regulations may add requirements for managing the data center. Check with the team managing regulatory compliance before selecting components and designing your new OOBI.


Resources and contact information

Resources

The Computer Emergency Response Team (CERT) Coordination Center is a federally funded research and development center operated by Carnegie Mellon University. It is an expertise center on Internet security and a well-recognized source of information and vulnerability alerts. The CERT Web site is at http://www.cert.org.

References

[1] Introduction to Security Policies, Charl van der Walt, http://www.securityfocus.com/infocus/1193

[2] Secure Programming for Linux and Unix HOWTO, David Wheeler, http://www.dwheeler.com/secure-programs/

[3] OpenSSH Project, http://www.openssh.org

[4] OpenSSL Project, http://www.openssl.org

[5] Kerberos, MIT, http://web.mit.edu/kerberos/www/

Feedback and Cyclades support

Please send comments on the content of this document to [email protected]. Comments and suggestions are welcome and appreciated.

For questions on Cyclades products or to request additional information, please visit our Web site at http://www.cyclades.com or contact us at [email protected].

If you already own a Cyclades product and would like to explore the security features described in this document, please consult the product documentation, visit the support section of the Cyclades Web site, or contact technical support at [email protected].

Cyclades offers templates for your security policy or best practices book that cover the configuration and installation of OOBI equipment in a secure environment. Those “security blueprint” documents can be requested from the company Web site.

Free subscriptions to the Cyclades user mailing lists, which cover the notification of new vulnerabilities encountered in our products and the availability of the respective fixes, can be obtained through the company Web site at http://www.cyclades.com.



www.cyclades.com