TRANSCRIPT
Award: Postgraduate Diploma in Strategic Business IT
Module Title: Computer Networking and Management
Assignment Title: Computer Networking and Management
Examination Cycle: June 2009
Candidate Name: David Tan Aik Chuan
NCC Education
Candidate No: 274340
Submission Date:
Marker’s comments:
Moderator’s comments:
Mark:    Moderated Mark:    Final Mark:
Statement and Confirmation of Own Work
Programme/Qualification name:
Student declaration
I have read and understood NCC Education’s policy on Academic Dishonesty and Plagiarism.
I can confirm the following details:
Student ID/Registration number: 274340
Name: David Tan Aik Chuan
Centre Name: Raffles Education Corp. (Singapore)
Module Name: Computer Networking and Management
Module Leader: Ivan
Number of words: 7,110
I confirm that this is my own work and that I have not plagiarized any part of it. I have also noted the assessment criteria and pass mark for assignments.
Due date:
Student Signature:
Submitted Date:
Computer Networking and Management
Table of Contents
TASK 1. THE INTERNET INFRASTRUCTURE AT OPUS IT SERVICES PTE., LTD.
1.1. Introduction
1.2. The internet infrastructure at Opus IT Services
Figure 1.1. Networking server and client operating system environment
1.3. Connectivity Approach
Figure 1.2. Connectivity of Opus system
1.4. Required security systems in a typical internet setup
TASK 2. NETWORK MANAGEMENT TECHNOLOGIES
2.1. Introduction
2.2. Functions
Figure 2.1. TCP/IP stack on two hosts connected via two routers and the layers used at each hop
Figure 2.2. Encapsulation of application data moving through the protocol stack
2.3. Examples of network faults detected by network management technologies
Figure 2.3. Fault tolerance using clustering
TASK 3. INTERNET CONTROL TECHNIQUES
3.1. Introduction
3.2. Time to derive an AES key using key space search
3.3. Comparative advantages and disadvantages of public key and private key cryptography
3.4. How to compute and verify digital signatures of long messages
Figure 3.1. Message digest computation process
Figure 3.2. Message Digest Encryption
Figure 3.3. Digital Signature - Process at Sender’s and Receiver’s End
3.5. How to compute and verify MACs of long messages
Figure 3.4. Sending messages using MAC algorithm
3.6. Comparison of digital signatures and message authentication codes
TASK 4. TCP ALGORITHMS
4.1. Introduction
4.2. Operation of the elements of a TCP congestion control algorithm
4.3. Comparison of various TCP algorithms
TASK 5. THE WEB CACHE
5.1. Introduction
5.2. Functions of an HTTP proxy
5.3. Reasons for the increase in the use of HTTP proxies
5.4. Effectiveness of HTTP proxies
6. REFERENCES
7. Bibliography
TASK 1. THE INTERNET INFRASTRUCTURE AT OPUS IT SERVICES PTE., LTD.
a. Undertake an investigation of an Internet infrastructure in your workplace or college. Produce a report of your findings, including appropriate screen dumps, with reference to the following:
i. Networking server and client operating system environment
ii. Hardware, NIC, HUBS/SWITCHES and other network appliances
iii. THREE internet enabled client applications available at the organisation
b. For the networked server and clients investigated in a) above, report in detail with a specially created diagram using MS-Visio or any Open source diagramming software, the following:
i. Connectivity to the outside world (LAN, WAN, VPN, ADSL, WIFI etc.)
ii. IP addressing logic for the networked machines
c. Define the hardware/software components compulsorily required for maximum security in a typical internet setup. Investigate the components available/not available from a real scenario and produce an executive report with your comments and recommendations.
1.1. Introduction
This task describes the internet infrastructure at Opus IT Services PTE., Ltd. (Opus), an IT
service company in Singapore. This section describes the servers, operating systems, and
hardware maintained in the company; and how the infrastructure connects to the outside
world, including security measures adopted to make the entire system free from malicious
attacks.
Opus provides its clients with the following services:
Database solutions provide database infrastructure to clients.
Process/methodology is an advisory service that helps clients keep their processes standard and consistent.
Outsourcing is a helpdesk, call centre and on-site support service provided to clients.
Flexi-IT Services is a telephone and on-site support plan offered to clients to assist them in maintaining IT hardware and software on a need-to-activate basis.
OPUS Flexi-IT Services is a token-based, multi-usage service plan where clients can call for
help in answering such questions as:
• How-to’s on desktops, server and network,
• Virus eradication and protection,
• Technical and usage questions,
• Operating systems questions, and
• Isolating problems related to computer hardware, peripherals, local area network
media/connection and software
1.2. The internet infrastructure at Opus IT Services
To be able to provide all its services, Opus maintains servers that are reserved for satisfying
the computing needs of its clients. Attached to these servers are devices that are specifically
configured to perform the services offered by Opus.
The networking server and client operating system environment is graphically described in
Figure 1.1 below.
Figure 1.1. Networking server and client operating system environment
Devices used in the Opus system
The Opus system is composed of a CCS C3560 01 core switch, a CCS C2950 01 edge
switch and a VLAN 210 ITE agent.
The Cisco Catalyst 3560 01 core switch is a fixed-configuration access layer switch for
branch-office environments, combining both 10/100/1000 and PoE configurations for IP
telephony, wireless access, video surveillance, building management systems, and remote
video.
Customers are able to deploy network-wide intelligent services such as advanced quality of
service (QoS), rate limiting, access control lists (ACLs), multicast management, and high-
performance IP routing.
The Cisco Catalyst 2950 01 edge switch, on the other hand, provides Fast Ethernet and
Gigabit Ethernet connectivity. It offers two sets of software and configuration that allow
small, mid-sized, and enterprise branch offices and industrial environments to select the
right combination for the network edge. It uses the Standard Image (SI) software for basic
data, video and voice services.
The Enhanced Image (EI) Software is used for rate limiting and security filtering.
The Cisco Cluster Management Suite (CMS) software allows users to simultaneously
configure and troubleshoot multiple desktop switches using a standard web browser.
Internet-enabled client applications available at Opus
Three Internet-enabled client applications offered by Opus are the helpdesk, call centre
and on-site support services.
1.3. Connectivity Approach
The connectivity of the Opus system to the outside world is through LANs and gateways,
shown graphically in Figure 1.2 below.
Figure 1.2. Connectivity of Opus system
IP addressing logic for the networked machines
The IP addressing logic used in the Opus system follows subnetting to allow the creation of multiple logical networks that exist within the Class A network. This approach enables Opus to use more than one network in the system.
The system also uses Classless Interdomain Routing (CIDR) to improve address space utilization and routing scalability in the Internet.
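As an illustrative sketch (the actual Opus address plan is not reproduced here, so the addresses below are hypothetical), Python's standard ipaddress module can show how subnetting carves logical networks out of a Class A private block and how CIDR summarises them:

```python
import ipaddress

# Hypothetical example: carving subnets out of a Class A private block.
network = ipaddress.ip_network("10.0.0.0/8")

# Split the Class A block into /24 subnets and take the first three,
# e.g. one per VLAN or office segment.
subnets = list(network.subnets(new_prefix=24))[:3]
for subnet in subnets:
    print(subnet, "->", subnet.num_addresses, "addresses")

# CIDR lets a router summarise these contiguous /24s as a single /22 route.
summary = ipaddress.ip_network("10.0.0.0/22")
assert all(s.subnet_of(summary) for s in subnets)
```

This illustrates how a single Class A allocation yields many independent logical networks while the routing table outside the site only needs one aggregated prefix.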
1.4. Required security systems in a typical internet setup
The internet setup at Opus complies with the following conditions required for computer
security:
1. Regulatory Compliance
To prevent data security breaches, Opus conducts periodic gap analyses of the current
status of security measures in place and compares them to best practices. Then,
improvements are designed and implemented.
2. Network Availability and Performance
Opus continuously optimizes network performance through continuing policy refinements
and system design improvements, and by using products that block bandwidth-hogging traffic.
3. Attack Prevention
Internet attacks such as unauthorized access, alteration or theft of data, worm and virus
infiltrations, and denial-of-service attacks continuously plague the system. Also common are
unauthorized grade changes, data tampering, persistent problems with critical systems and
theft of sensitive information. Opus has implemented measures to stop these attacks.
These measures include the use of virus protection software and periodic changes of
passwords. Ethical hacking is also conducted periodically to identify weak areas in the
system and enable the design and implementation of remedial measures.
4. Security
Opus uses a methodology to analyze and prioritize risks. This methodology allows the
scheduling of implementation of security measures in order of importance.
Opus has adopted the following measures to minimize the risk of attacks:
1. Network packet sniffers

Risk: Sniffers can capture critical system information, such as user account information and passwords. When an attacker obtains the correct account information, he gains access to a system-level user account, which he can use to create a new account that serves at any time as a back door into a network and its resources.

Ways of minimizing risk: Use of trusted networks. When setting up the firewall server, Opus identified the types of networks attached to the firewall server through network adapter cards. The trusted networks cover the firewall server and all networks behind it.

Virtual private networks (VPNs), however, are also considered trusted networks. The packets that start on a VPN are considered to be from the internal perimeter network. For communications that originate on a VPN, security mechanisms allow the firewall server to authenticate the origin and data integrity, and enforce the other security measures applied on trusted networks.
Untrusted Networks
Untrusted networks are networks that are known to be outside Opus’ security perimeter.
When setting up the Opus firewall server, we identified the untrusted networks from which the firewall can accept requests.
Unknown Networks
Unknown networks are unknown to the firewall because we cannot tell the firewall server that the network is a trusted or an untrusted network. The firewall therefore applies its set security policy.
Establishing a Security Perimeter
A critical part of Opus’ overall security solution is a network firewall, which monitors traffic crossing network perimeters and imposes restrictions according to security policy.
Perimeter routers are found at the network boundary, such as between private networks, intranets, extranets, or the Internet. Firewalls separate internal and external networks.
The Opus network security policy focuses on controlling the network traffic and usage. It identifies its resources and threats, defines use and responsibilities, and executes actions when the security policy is violated.
Perimeter Networks
Opus has designated the networks of computers that are to be protected and defined the network security mechanisms that protect them. Thus the firewall server serves as the gateway for communications between trusted networks and untrusted and unknown networks.
2. IP spoofing

Risk: Can provide access to user accounts and passwords. An attacker can then emulate an internal user to send e-mail messages to business partners that appear to have originated from someone within the organization.

Ways of minimizing risk: The following measures have been taken:

Filtering at the Router - Implementing ingress and egress filtering on routers. An access control list blocks unwanted private IP addresses. The filter has been designed not to accept addresses from within the network. On the upstream interface, source addresses outside the network are restricted, thereby preventing someone on the Opus network from sending spoofed traffic to the Internet.

Encryption and Authentication - Use of encryption and authentication reduces spoofing. This is done over a secure channel. (Matthew Tanase, March 11, 2003)
3. Password attacks

Risk: Can provide access to accounts that can be used to modify critical network files and services. An example is an attacker modifying the routing tables for the network; by doing so, the attacker ensures that all network packets are routed to his computer before they are transmitted to their final destination.

Ways of minimizing risk: Opus has adopted the following measures to minimize the risk of password attacks:

Use of passphrases. A passphrase is something one can always remember, such as a quote, a favourite sentence, or a combination of words with numbers and special characters. Note, however, that both passwords and passphrases can be captured by a keylogger.

Use of biometrics. The biometric systems applied cover fingerprint and voice recognition. (Dancho Danchev, January 7, 2005)
4. Denial-of-service attacks

Risk: These attacks make a service unavailable for normal use, which is accomplished by exhausting some resource limitation on the network or within an operating system or application. The motivation for DoS attacks is not to break into a system, but to deny the use of the system to others who need its services. DoS attacks do the following:

- Crash the system.
- Deny communication.
- Bring the system down or have it operate at a reduced speed.
- Hang the system.

Ways of minimizing risk:

Amplifier configuration. The routers are configured so that they do not forward directed broadcasts onto networks. Directed broadcasts are disabled on all routers, ensuring that employees on the internal network cannot launch such attacks. A firewall gives additional security.

Configuration of server operating systems. Servers are configured so that they will not respond to a directed broadcast request.

Network connectivity attacks. These attacks overload the victim so that its TCP/IP stack cannot handle any further connections, and its processing queues fill up completely with nonsense malicious packets. Legitimate connections are thus denied. To protect the network against connectivity attacks, the following has been done:

- Decreased the TCP connection timeout on the Opus server.
- Used a firewall at the perimeter, which works as an intermediary in forwarding connections to the server. (Abhishek Singh, CISSP, December 14, 2005)
TASK 2. NETWORK MANAGEMENT TECHNOLOGIES
a. Explain the function of the following in the context of network management:
• A managing entity • A managed device • A managed object • A management information base • A network management protocol
b. Outline two examples of network faults which might be detected using network management technologies.
2.1. Introduction
There are technologies and tools available for network management functions. There is,
however, no single solution available to address all the following network management areas:
1. network device and application fault management,
2. network device and application configuration management,
3. network utilization and accounting management,
4. network performance management, and
5. security management.
(Networkdictionary.com, Network Management Technologies, accessed April 20, 2009,
http://www.networkdictionary.com/networking/NetworkManagementTechnologies.php)
This task describes the various network management functions, and examples of network
faults which might be detected by network management technologies.
2.2. Functions
The following are the functions of a managing entity, a managed device, a managed object, a
management information base (MIB), and a network management protocol.
1. Managing entity
A network managing entity refers to the hardware that monitors the operation of networked
systems. This entity, usually a server, keeps the network (and the services that the
network provides) operating smoothly. It also monitors the network to spot problems,
ideally before users are affected (A. Clemm, Network Management Fundamentals,
Cisco Press, 2006).
2. Managed device
A managed device is a network node that contains a Simple Network Management Protocol
(SNMP) agent and that resides on a managed network. Managed devices gather and save
information and make it available to network management systems (NMSs) using SNMP.

Managed devices, sometimes called network elements, can be any type of device, such as
routers, access servers, switches, bridges, hubs, IP telephones, computer hosts, and
printers (Wikipedia, Simple Network Management Protocol, accessed April 20, 2009).
Centralized management systems running on servers gather information from managed
devices and store this information in a database called a management information base.
The database can be accessed to provide statistics on the performance of the network.
Earlier versions of SNMP required agents to be polled for information, which increased
network traffic merely to obtain traffic-flow information. Because of this, SNMPv2 was
developed to allow the management of devices across networks that did not run TCP/IP,
to automatically report alarms, and to provide security for its transmissions.

SNMPv2 offers management-system authentication, encryption of management information,
and the ability to allow more than one agent per device. Because it is more secure, devices
can be monitored and configured remotely.
Protocol: SNMP
Advantages: Part of TCP/IP; already implemented.
Disadvantages: Too much network traffic due to polling; supports only TCP/IP; no security.

Protocol: SNMPv2
Advantages: Supports other protocols; secure; allows remote configuration.
Disadvantages: Never fully implemented.

Protocol: Updated SNMP
Advantages: Easier to implement than SNMPv2 due to the removal of security features.
Disadvantages: No security features; no remote configuration.

Protocol: SNMPv3
Advantages: Has the security of SNMPv2.
Disadvantages: Not supported by standards; manufacturers are offering their own solutions.
3. Managed object
In a network, a managed object is an abstract representation of network resources that are
managed.
A managed object represents a physical entity, a network service, or an abstraction of a
resource that exists independently of its use in management (Wikipedia, Managed Object,
accessed April 20, 2009).
4. Management information base
A management information base (MIB) is the database used to manage the devices in a
network. It is a collection of objects in a database used to manage entities such as routers
and switches.
Databases are hierarchical (tree-structured) and entries are addressed through object
identifiers.
SNMP uses MIBs. Components controlled by the management console need an SNMP
agent - a software module that can communicate with the SNMP manager.
Examples of MIB objects include:
- the output queue length, which has the name ifOutQLen, and
- the Address Translation table (like ARP tables), called atTable (Wikipedia, Management
Information Base, accessed April 20, 2009).
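As a rough sketch of this hierarchy, the tree-structured addressing of MIB objects can be modelled with OID tuples. The OIDs and the stored values below are illustrative only; a real agent would read live values from the device:

```python
# A minimal sketch of how MIB objects are addressed hierarchically by
# object identifier (OID). The numeric values stored here are made up.
mib = {
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 21, 1): 0,   # ifOutQLen, interface 1 (illustrative)
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 21, 2): 5,   # ifOutQLen, interface 2 (illustrative)
}

def snmp_get(oid_string):
    """Look up a single object by its dotted OID, as an SNMP GET would."""
    oid = tuple(int(part) for part in oid_string.split("."))
    return mib.get(oid)

def snmp_walk(prefix_string):
    """Return every object under a subtree, as an SNMP WALK would."""
    prefix = tuple(int(part) for part in prefix_string.split("."))
    return {oid: value for oid, value in mib.items() if oid[:len(prefix)] == prefix}

print(snmp_get("1.3.6.1.2.1.2.2.1.21.2"))      # 5
print(len(snmp_walk("1.3.6.1.2.1.2.2.1.21")))  # 2
```

The tuple-prefix test mirrors how a WALK retrieves all instances beneath a node of the MIB tree.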
5. Network management protocol
The Internet Protocol Suite, also known as TCP/IP is the set of communications protocols
used for the Internet and other similar networks.
The Internet Protocol Suite may be viewed as a set of layers. Each layer solves problems in
the transmission of data and provides services to the upper-layer protocols, building on
services from the lower layers.
Upper layer protocols are logically closer to the user and deal with more abstract data,
relying on lower layer protocols to translate data into forms that can eventually be
physically transmitted.
In general, an application uses a set of protocols to send its data down the layers, being
further encapsulated at each level (Wikipedia, Internet Protocol Suite, accessed April 20,
2009).
Figures 2.1 and 2.2 are examples showing two Internet host computers communicating
across local network boundaries constituted by their internetworking gateways (routers).
Figure 2.1. TCP/IP stack on two hosts connected via two routers and the layers
used at each hop
Figure 2.2. Encapsulation of application data moving through the protocol stack.
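The encapsulation of Figure 2.2 can be sketched in a few lines of Python. The header strings are simplified placeholders, not real protocol headers:

```python
# A toy sketch of encapsulation: each layer prepends its own header to the
# payload handed down by the layer above. Headers here are just markers.
def encapsulate(app_data: bytes) -> bytes:
    segment = b"TCP|" + app_data   # transport layer adds a TCP header
    packet = b"IP|" + segment      # internet layer adds an IP header
    frame = b"ETH|" + packet       # link layer adds a frame header
    return frame

def decapsulate(frame: bytes) -> bytes:
    # The receiver strips the headers in reverse order.
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

wire = encapsulate(b"GET / HTTP/1.1")
print(wire)                        # b'ETH|IP|TCP|GET / HTTP/1.1'
assert decapsulate(wire) == b"GET / HTTP/1.1"
```

This mirrors Figure 2.1 as well: routers strip and re-add only the lower-layer headers at each hop, while the TCP segment and application data pass through unchanged.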
2.3. Examples of network faults detected by network management technologies
Faults can occur either in the source of the content, such as an encoder or digital media
library, or in the distribution of content to clients, such as faults in distribution servers or
cache/proxy servers.
To minimize the risk of faults, the system should be made redundant. If this is not done, the
media distribution process is vulnerable to failure.
1. Downstream faults
Failure of one component, such as a distribution server, can prevent clients from receiving
the content they requested. Using multiple servers to stream the same content, called
clustering, reduces the risk of interrupted service.
Clustering is a fault tolerance technique because reduced capacity or failure of any one
server is unlikely to interrupt the whole system. If one server stops, the workload of the
failed server can be transferred to the other server, as shown in Figure 2.3 below.
Figure 2.3. Fault tolerance using clustering
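The failover idea behind clustering can be sketched as follows; the server names and the status map are hypothetical:

```python
# A minimal sketch of cluster failover: try each server in turn and fall
# back to the next if one is down. Server names and statuses are made up.
def fetch_from_cluster(servers, is_up, request):
    for server in servers:
        if is_up.get(server, False):
            return f"{server} served {request}"
    raise RuntimeError("all servers in the cluster are down")

servers = ["stream-1", "stream-2"]
status = {"stream-1": False, "stream-2": True}   # stream-1 has failed
print(fetch_from_cluster(servers, status, "video.wmv"))
# stream-2 served video.wmv
```

Because the request succeeds as long as any one member is up, the failure of a single server degrades capacity without interrupting the service as a whole.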
2. Security faults
Access to the system by unauthorized persons can damage the content and the system.
Some content may not have value in itself but may contain sensitive information that must be
protected against theft. Locked doors and network firewalls alone may not be adequate to
keep out an intruder.
The security of a system and content is dependent upon two things: (1) the physical
security of the system hardware and storage and (2) the virtual security of your network.
Physical security. All critical media storage and hardware components should be
housed in a room dedicated to that purpose. Only persons directly involved with the
operation of the system should have access. Additional precautions such as card key
readers, combination locks, alarm systems, and closed-circuit video depend on the value
of your data and your overall security strategy.
Network security. Network security considers the following aspects:
- Authentication. Persons requesting access must have their credentials verified. This
process usually involves logging a name and password.
- Authorization. After identity has been established, it must meet certain criteria before
the requestor can gain access to the restricted content.
- Permissions. Each permitted user will have a specific set of permissions which allow
performing certain functions and may prohibit the user from performing others.
- Firewalls. Firewalls prevent access to specific, closely monitored ports. A firewall
can also restrict the type of information that can pass through the ports. Firewalls are
used to separate a proprietary network from the Internet, but they can also be used to
provide strict security within a network (Microsoft TechNet, Windows Media Services
Deployment Guide).
TASK 3. INTERNET CONTROL TECHNIQUES
a. Assuming that you have access to 10,000 up-to-date PCs and stating any other reasonable assumptions you need to make, calculate how long it will take, on average, to derive an AES key using a key space search (sometimes referred to as a brute force attack).
b. Explain the comparative advantages and disadvantages of public key and private key cryptography. For each cryptography type: name and give outline information for ONE common cipher (excluding AES).
c. Describe the sequence of operations which must be undertaken to compute and then verify the digital signature of a long message.
d. Describe the sequence of operations which must be undertaken to compute and then verify the message authentication code (MAC) of a long message.
e. Compare and contrast digital signatures and MACs.
3.1. Introduction
This task discusses how long it takes to derive an AES key using key space search. It also
shows the advantages and disadvantages of public key and private key cryptography, the
sequence of operations to compute and then verify the digital signature and the message
authentication code of a long message.
3.2. Time to derive an AES key using key space search
The time it takes to break a 128-bit AES key by exhaustive search is about 10^13 years. To
check all 2^128 (340,282,366,920,938,463,463,374,607,431,768,211,456) possibilities, a
device that could check a billion billion keys (10^18) per second would need about 10^13
years to exhaust the key space (Wikipedia, Brute Force Attack, accessed April 20, 2009).
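The estimate can be checked with a short calculation. The 10^18 trials per second figure is the one quoted above; the per-PC rate of 10^9 keys per second used for the task's 10,000-PC scenario is an assumption:

```python
# Worked version of the key-space estimate above.
SECONDS_PER_YEAR = 365 * 24 * 3600

keys = 2 ** 128
rate = 10 ** 18                        # trials per second, as assumed above
years_full_search = keys / rate / SECONDS_PER_YEAR
print(f"{years_full_search:.2e} years")            # 1.08e+13 years

# On average only half the key space must be searched before the key is found.
print(f"{years_full_search / 2:.2e} years on average")

# The task's scenario: 10,000 PCs at an assumed 10**9 keys per second each
# gives only 10**13 keys per second in total.
pc_rate = 10_000 * 10 ** 9
print(f"{keys / pc_rate / SECONDS_PER_YEAR / 2:.2e} years on average")
```

Even the optimistic 10^18 keys-per-second device leaves the search roughly a thousand times the age of the universe; the 10,000-PC scenario is slower still by five orders of magnitude.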
3.3. Comparative advantages and disadvantages of public key and private key cryptography
Public key-private key cryptography uses asymmetric key algorithms: the key used to
encrypt a message is not the same as the key used to decrypt it. Each user has a pair of
cryptographic keys - a public key and a private key. The private key is kept confidential,
while the public key is publicly known. Messages are encrypted with the recipient's public
key and can be decrypted only with the corresponding private key.
In contrast, symmetric key algorithms use a single secret key shared by sender and receiver
for both encryption and decryption. To use symmetric encryption, the sender and receiver
must share a secret key in advance.
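The symmetric idea can be sketched with a toy XOR keystream (for illustration only; this is not a real cipher, and the key and message are hypothetical):

```python
import secrets

# Toy illustration of symmetric encryption: the same shared key both
# encrypts and decrypts. XOR with a repeating key is NOT a secure cipher.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = secrets.token_bytes(16)   # agreed on in advance by both parties
ciphertext = xor_cipher(b"meet at noon", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)   # the same key reverses it
assert plaintext == b"meet at noon"
```

The point of the sketch is the key-distribution problem stated above: both parties must already hold `shared_key` before any message can be exchanged, which is exactly what public key cryptography avoids.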
1. Description
The two main branches of public key cryptography are:
Public key encryption — a message encrypted with a recipient's public key cannot be
decrypted by anyone except a possessor of the matching private key. This is used for
secrecy.
Digital signatures — a message signed with a sender's private key can be verified by
anyone who knows the sender's public key, proving that the sender had access to the
private key and that the message was not tampered with.
Public-key encryption can be likened to a locked mailbox with a mail slot. The mail slot is
accessible to all; its location (the street address) is the public key. Anyone who knows the
address can go to the gate and drop a message through the slot. However, only the person
who has a correct key can open the mailbox and read the message.
Digital signatures can be likened to the sealing of an envelope with a personal wax seal.
The message can be accessed by anyone, but the seal proves its authenticity.
A problem with the use of public-key cryptography is confidence or proof that a public key is
correct, belongs to the person or entity claimed, and has not been tampered with or replaced
by a third party. The usual approach to this problem is a public-key infrastructure (PKI),
where one or more third parties, known as certificate authorities, certify ownership of key
pairs. Another approach is the "web of trust" method of ensuring the authenticity of key pairs.
All known public key techniques are more computationally intensive than their secret-key
counterparts, but can be made fast enough for a wide variety of applications.
For digital signatures, the sender hashes the message and then signs the resulting "hash
value". To verify the signature, the recipient computes the hash of the message and
compares it with the signed hash value to prove that the message was not tampered with.
2. Security
The application of a public key encryption system is confidentiality; a message which a
sender encrypts using the recipient's public key can be decrypted only by the recipient's
paired private key.
To authenticate, and maintain confidentiality, the sender first signs the message using his
private key, then encrypts the message and signature using the recipient's public key.
3. Weaknesses
Among symmetric key encryption algorithms, only the one-time pad can be proven to be
secure against any attacker, no matter how much computing power is available.
Unfortunately, all public-key schemes are susceptible to brute-force key search attacks.
These insecurities can be mitigated by choosing key sizes large enough that breaking the
code would take an impractically long time.
Another security vulnerability in using asymmetric keys is the possibility of a man in the
middle attack, where communication of public keys is intercepted by a third party and
modified to provide different public keys instead. This attack may appear to be difficult to
implement in practice, but it's not impossible when using insecure media such as public
networks or wireless communications.
One way to prevent such attacks is to use a certificate authority, a trusted third party
responsible for verifying the identity of a user of the system and issuing a tamper-resistant
and non-spoofable digital certificate for participants (Wikipedia, Public Key Cryptography).
3.4. How to compute and verify digital signatures of long messages
It is important to understand the meaning of various terms before discussing how to compute
and verify digital signatures.
A message digest, or hash, is a fixed-length value computed from a message. The digest
value is guaranteed to be the same for the same message; if the message is changed, the
message digest likewise changes. Because of this, message digests can be used to check
whether a message has been changed. However, there are issues. These are:
1. An attacker can change both the original message and the computed message digest.
Therefore, the receiver has no way of knowing whether the original message and the
message digest have been changed.
2. A message digest does not prove that the message was sent by the sender and not by
somebody else. So, if a bank receives an instruction to transfer USD1,000 from Account A
to Account B, the bank has no way of knowing if this instruction is genuine.
Two issues need to be addressed: message integrity and non-repudiation.
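The digest behaviour described above can be demonstrated with a standard hash function (SHA-256 is used here as an example; the text does not mandate a particular algorithm, and the bank-transfer message is hypothetical):

```python
import hashlib

# The digest property described above: the same message always hashes to the
# same value, and any change to the message changes the digest.
digest1 = hashlib.sha256(b"Transfer USD1,000 from Account A to Account B").hexdigest()
digest2 = hashlib.sha256(b"Transfer USD1,000 from Account A to Account B").hexdigest()
digest3 = hashlib.sha256(b"Transfer USD9,000 from Account A to Account B").hexdigest()

assert digest1 == digest2   # deterministic for the same message
assert digest1 != digest3   # a one-character change alters the digest
```

Note that this alone gives neither integrity nor non-repudiation: an attacker who can replace the message can simply recompute the digest, which is exactly issue 1 above.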
To solve the weakness of a hash, digital signatures can be used to guarantee the validity of
message integrity and non-repudiation using the computation process shown in Figure 3.1
below.
Figure 3.1. Message digest computation process
We know that the main problem with this scheme is that an attacker can easily alter the
original message and rerun the same message digest algorithm on the altered message.
The recomputed digest then matches the altered message, making the tampering difficult
to detect.
To prevent others from modifying messages, we can encrypt the message digest, as shown in
Figure 3.2 below.
Figure 3.2. Message Digest Encryption
The figure above shows that the message digest must be encrypted before it is sent to the
receiver; the receiver rejects the message if the digest it receives is not encrypted. The
diagram shows that:
a. A genuine sender is able to perform this encryption operation, and a genuine receiver is
able to verify it; and
b. It would be infeasible for an attacker to perform this encryption.
While an attacker would still be able to compute the message digest, he must not be able to
encrypt it.
Public key cryptography, also called asymmetric key cryptography, is used for this purpose.
The idea is that the sender, and only the sender, knows a private key, which is used to
encrypt the message digest to produce the output shown in the earlier diagram. Anyone who
knows the sender’s public key can decrypt the message digest successfully, but only the
holder of the private key could have produced it.
Thus:
a. The sender would encrypt the message digest with a private key. The sender must keep
the private key secret.
b. The output is called the digital signature for this particular message.
c. The sender sends the message and the digital signature to the receiver.
d. The receiver verifies the digital signature using the sender’s public key, which is publicly
known. This should give the receiver a message digest, say MD-1.
e. The receiver also computes a new message digest on the original message, say MD-2.
If MD-1 = MD-2, we achieve both message integrity (the message has not been tampered with, because the attacker does not know the sender’s private key) and non-repudiation (the message is proven to have been sent by the sender, since only he knows the private key corresponding to this public key) (Indicthreads.com, What are digital signatures).
Figure 3.3 shows this.
Figure 3.3. Digital Signature - Process at Sender’s and Receiver’s End
The following is a sample program in Java, which computes and verifies a digital signature.

// Compute and Verify a Digital Signature
// Written by Atul Kahate
import java.security.*;

public class DigitalSignatureExample {

    public static final String str = "This is the message to be digitally signed.";

    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair
        System.out.println("Attempting to generate a key pair ...");
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair kp = kpg.genKeyPair();
        System.out.println("Key pair generated successfully ...");

        // Sign the data
        byte[] ba = str.getBytes("UTF8");
        Signature sig = Signature.getInstance("MD5withRSA");
        sig.initSign(kp.getPrivate());
        sig.update(ba);
        byte[] signedData = sig.sign();

        // Display the plain text and the signature
        System.out.println("Original plain text was : " + str);
        System.out.println("Signature is : " + new String(signedData));
        System.out.println("=== Now trying to verify signature ===");

        // Verify the signature
        sig.initVerify(kp.getPublic());
        sig.update(ba);
        boolean isSignOk = sig.verify(signedData);
        System.out.println("Signature verification results are: " + isSignOk);
    }
}
(Atul Kahate, July 11, 2007).
3.5. How to compute and verify MACs of long messages
In cryptography, a message authentication code (MAC) is a short piece of information used
to authenticate a message.
A MAC algorithm, sometimes called a keyed hash function, accepts a secret key and a
message to be authenticated, and outputs a MAC, sometimes called a tag. The MAC value
protects both the integrity and the authenticity of a message by allowing verifiers to detect
any changes.
While MACs are similar to hash functions, they have different security requirements. To be
secure, a MAC function must resist forgery under chosen-plaintext attacks: even if an
attacker can obtain valid MACs for messages of his choosing, he cannot guess the MAC for
any message that has not yet been asked.
MACs differ from digital signatures, as MAC values are generated and verified using the same
secret key. This implies that the sender and receiver of a message must agree on a key before
initiating communications, as in the case of symmetric encryption. Furthermore, MACs
cannot provide the non-repudiation offered by signatures: any user who can verify a MAC can
also generate MACs for other messages.
On the other hand, a digital signature is generated using the private key of a key pair, which is
asymmetric encryption. Since this private key is only accessible to its holder, a digital
signature proves that a document is in fact signed by that holder. Because of this, digital
signatures offer non-repudiation (Wikipedia, Message Authentication Code, accessed April
20. 2009).
An example is shown in Figure 3.4 below.
Figure 3.4. Sending messages using a MAC algorithm
In this example, a sender runs a message through a MAC algorithm to produce a MAC data
tag. The message and the MAC tag are then sent to the receiver.
The receiver then runs the message portion of the transmission through the same MAC
algorithm using the same key, producing a second MAC data tag. The receiver then
compares the MAC tag received to the second MAC tag. If they are identical, the receiver
can assume that the message has integrity and was not altered.
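The sender/receiver process above can be sketched with Java's standard javax.crypto.Mac API. The HMAC-SHA256 algorithm and the hard-coded key below are assumptions for illustration, not part of the original text:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class MacExample {
    // Compute an HMAC-SHA256 tag over a message with a shared secret key.
    static byte[] tag(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes("UTF-8"));
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret".getBytes("UTF-8"); // agreed in advance by both parties

        // Sender computes the tag and transmits (message, tag).
        String message = "Transfer USD1,000 from Account A to Account B";
        byte[] senderTag = tag(key, message);

        // Receiver recomputes the tag with the same key and compares.
        byte[] receiverTag = tag(key, message);
        System.out.println("Message authentic: " + Arrays.equals(senderTag, receiverTag));
    }
}
```

Because the same key both creates and verifies the tag, anyone able to verify it could also have forged it, which is why MACs cannot offer non-repudiation.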
3.6. Comparison of digital signatures and message authentication codes
A digital signature is generated using the private key of a key pair, i.e., asymmetric encryption.
Since this private key is only known to its holder, a digital signature proves that a document
was in fact signed by that holder. Thus, digital signatures provide non-repudiation.
Message authentication codes (MAC) values, on the other hand, are generated and verified
using the same secret key.
This means that the sender and receiver of a message must agree on keys before initiating
communications, as in the case with symmetric encryption.
Compared to digital signatures, MACs do not provide the property of non-repudiation offered
by digital signatures. Any user who can verify a MAC is also capable of generating MACs for
other messages (Wikipedia, Message Authentication Code, accessed April 20, 2009).
TASK 4. TCP ALGORITHMS
In the context of TCP congestion control:
a. Explain the operation of the THREE elements which make up the TCP congestion control algorithm.
b. With the aid of diagrams where appropriate, compare and contrast the Tahoe, Reno and Vegas TCP algorithms.
4.1. Introduction
This task discusses TCP congestion control and focuses on three algorithms, namely TCP
Tahoe, Reno and Vegas.
4.2. Operation of the elements of a TCP congestion control algorithm
Early TCP implementations had a major weakness: they could not relate data flow rates to
congestion in the network. The issues were how to detect congestion, and how to adapt the
flow rate to the congestion level.
To address these issues, Van Jacobson introduced TCP Tahoe in 1988-1989, focusing on
congestion control as the means of addressing the problem of congestion in networks.
There are three elements of congestion control in TCP Tahoe. These are:
Additive increase, multiplicative decrease
Slow start
Fast retransmit.
1. Additive increase, multiplicative decrease
Each time a packet drop occurs, the window size is cut in half: a multiplicative decrease is
executed, backing off from congestion.
When no losses are observed, the window size is increased gradually: an additive increase is
executed.
The algorithm for additive increase and multiplicative decrease can be described as
follows:
- Increment the congestion window by one packet per RTT (linear increase).
- Divide the congestion window by two whenever a timeout occurs
  (multiplicative decrease).
In practice, the window is incremented a little for each ACK:
    Increment = (MSS * MSS) / CongestionWindow
    CongestionWindow += Increment
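As an illustrative sketch (not a full TCP implementation), the two rules can be expressed directly in code, using the same per-ACK increment formula as above; the MSS value and initial window are assumptions:

```java
public class Aimd {
    static final int MSS = 1460;   // maximum segment size in bytes (assumed)
    int cwnd = 10 * MSS;           // congestion window in bytes

    // Additive increase: on each ACK, grow by MSS*MSS/cwnd,
    // which adds roughly one MSS per round-trip time.
    void onAck() {
        cwnd += (MSS * MSS) / cwnd;
    }

    // Multiplicative decrease: on a timeout, halve the window
    // (never dropping below one segment).
    void onTimeout() {
        cwnd = Math.max(MSS, cwnd / 2);
    }
}
```

Over many round trips this produces the characteristic sawtooth pattern: slow linear climbs punctuated by sharp halvings whenever loss is detected.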
2. Slow Start
The purpose of slow start is to determine the available capacity quickly.
To do this:
- An optimistic congestion window is estimated using the congestion threshold.
- The congestion window starts at 1 packet.
- For each RTT, the congestion window is doubled (incremented by 1 packet
  for each ACK).
- When the congestion threshold is crossed, additive increase is used.
Slow start is used when:
- a connection first starts, and
- a connection goes dead waiting for a timeout.
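A minimal sketch of this growth pattern follows; the threshold of 16 packets is an assumed value for illustration:

```java
public class SlowStartSketch {
    // Simulate window growth (in packets) over a number of round-trip times,
    // starting from 1 packet with an assumed threshold of 16 packets.
    static int windowAfter(int rtts) {
        int cwnd = 1, ssthresh = 16;
        for (int i = 0; i < rtts; i++) {
            if (cwnd < ssthresh) cwnd *= 2;   // slow start: double per RTT
            else cwnd += 1;                   // additive increase past the threshold
        }
        return cwnd;
    }

    public static void main(String[] args) {
        for (int r = 0; r <= 8; r++)
            System.out.println("after " + r + " RTTs: cwnd = " + windowAfter(r));
    }
}
```

The output shows exponential growth (1, 2, 4, 8, 16) until the threshold is crossed, then linear growth (17, 18, ...), matching the two phases described above.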
3. Fast retransmit
When coarse-grained TCP timeouts lead to long idle periods, fast retransmit
works as follows:
- Send an ACK on every packet reception.
- Send a duplicate of the last ACK when a packet is received out of order.
- Use duplicate ACKs to trigger retransmission.
With fast recovery in TCP Reno, the slow start phase is skipped; instead, the
last successful congestion window is simply halved.
4.3. Comparison of various TCP algorithms
TCPs either control congestion or avoid congestion. As discussed above, TCP Tahoe uses
congestion control as its approach. In contrast, two other TCP types, TCP Reno and TCP
Vegas, use congestion avoidance as their approach.
TCP Reno and its derivatives today use congestion avoidance as follows:
- Try to avoid forcing the source to go into slow start.
- The source goes into slow start initially and upon timeouts.
- The source cuts the congestion window in half upon a fast retransmit.
- Single packet drops can be caught by the fast retransmit/fast recovery algorithm.
- Multiple consecutive packet drops will force the source into slow start.
CONGESTION AVOIDANCE
TCP’s basic strategy is to control congestion once it happens: repeatedly increase the load
in an effort to find the point at which congestion occurs, and then back off.
An alternative strategy is congestion avoidance: predict when congestion is about to
happen, then reduce the rate before packets start being discarded.
TCP VEGAS
TCP Vegas adopts congestion avoidance instead of congestion control. It predicts and avoids
congestion even before it occurs.
TCP Tahoe and Reno, on the other hand, adopt congestion control: they deal with congestion
after it occurs.
To predict congestion, TCP Vegas has the source watch for signs that a router’s queue is
building up, which could lead to congestion: in TCP Vegas, congestion is imminent when the
RTT grows while the sending rate flattens.
Packet accumulation in the network can thus be inferred by monitoring the RTT and the
sending rate, as shown graphically below:
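The idea can also be sketched numerically. Vegas compares the expected rate (window/BaseRTT) with the actual rate (window/currentRTT); the α and β thresholds below are assumed illustrative values, not taken from the original text:

```java
public class VegasSketch {
    static final double ALPHA = 1, BETA = 3;   // queue thresholds in packets (assumed)

    // Decide how to adjust the window given the current measurements.
    // diff = (expected - actual) * baseRtt estimates packets queued in the network.
    static int adjust(int cwnd, double baseRtt, double currentRtt) {
        double expected = cwnd / baseRtt;
        double actual = cwnd / currentRtt;
        double diff = (expected - actual) * baseRtt;
        if (diff < ALPHA) return cwnd + 1;     // queue nearly empty: speed up
        if (diff > BETA)  return cwnd - 1;     // queue building: back off before loss
        return cwnd;                           // in the sweet spot: hold steady
    }

    public static void main(String[] args) {
        System.out.println(adjust(10, 0.100, 0.101)); // RTT barely grew -> increase
        System.out.println(adjust(10, 0.100, 0.180)); // RTT grew a lot  -> decrease
    }
}
```

Because the decision is driven by RTT growth rather than packet loss, the window backs off before any packets are discarded, which is the essence of congestion avoidance.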
TCP Tahoe and TCP Reno are two variations on this theme. To avoid congestion collapse,
both maintain a congestion window, limiting the total number of unacknowledged packets that
may be in transit end-to-end. This is similar to TCP’s sliding window used for flow control.
TCP uses slow start to increase the congestion window after a connection is initialized and
after a timeout. It starts with a window of twice the maximum segment size (MSS). Although
the initial rate is low, the rate of increase is rapid: for every packet acknowledged, the
congestion window increases by 1 MSS, so that the congestion window doubles every round
trip time (RTT). When the congestion window exceeds a threshold, ssthresh, the algorithm
enters congestion avoidance (Jacobson, 1995).
In some implementations, the initial ssthresh is large, and so the first slow start usually ends
after a loss. However, ssthresh is updated at the end of each slow start, and affects
subsequent slow starts triggered by timeouts.
In congestion avoidance, the congestion window is additively increased by one MSS every
round trip time. When a packet is lost, duplicate ACKs will be received.
Tahoe and Reno differ in how they detect and react to packet loss. In TCP Tahoe, loss is
detected when a timeout expires before an ACK is received. Tahoe then reduces the
congestion window to 1 MSS and resets to slow start.
In TCP Reno, on the other hand, if three duplicate ACKs are received (i.e., three ACKs
acknowledging the same packet, which are not piggybacked on data and do not change the
receiver's advertised window), Reno halves the congestion window, performs a "fast
retransmit", and enters a phase called fast recovery. If an ACK times out, slow start is used in
the same manner as in Tahoe.
In fast recovery (TCP Reno only), TCP Reno retransmits the missing packet signalled by the
three duplicate ACKs and waits for an acknowledgment of the entire transmit window before
returning to congestion avoidance. If no acknowledgment arrives, TCP Reno times out and
enters slow start.
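The different reactions to loss can be summarised in a short sketch (window sizes in MSS units; the helper below is illustrative, not an actual TCP implementation):

```java
public class LossReaction {
    // On a retransmission timeout, both variants collapse to slow start:
    // the window drops to 1 MSS and ssthresh becomes half the old window.
    static int[] onTimeout(int cwnd) {
        int ssthresh = Math.max(cwnd / 2, 2);
        return new int[] { 1, ssthresh };          // {new cwnd, new ssthresh}
    }

    // On three duplicate ACKs: Tahoe reacts exactly as on a timeout,
    // while Reno halves the window and skips slow start (fast recovery).
    static int[] onTripleDupAck(String variant, int cwnd) {
        int ssthresh = Math.max(cwnd / 2, 2);
        if (variant.equals("Tahoe")) return new int[] { 1, ssthresh };
        return new int[] { ssthresh, ssthresh };   // Reno
    }
}
```

For example, with a 20 MSS window and three duplicate ACKs, Tahoe falls back to 1 MSS while Reno continues at 10 MSS, which is why Reno recovers from isolated losses much faster.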
In a timeout, both algorithms reduce the congestion window to 1 MSS.
TCP Vegas
Until the mid-1990s, all TCPs set timeouts and measured round-trip delays based upon the
last transmitted packet in the transmit buffer. Larry Peterson and Lawrence Brakmo
introduced TCP Vegas where timeouts were set and round-trip delays were measured for
every packet in the transmit buffer.
Also, TCP Vegas uses additive increases and additive decreases in the congestion window.
TASK 5. THE WEB CACHE
a. Explain the function of an http proxy (also known as a web cache) and give an example.
b. Detail three reasons for the recent increase in the deployment of http proxies on the Internet.
c. Explain with examples why the increased application of database-driven dynamic web content reduces the effectiveness of http proxies.
5.1. Introduction
Web caching is the storage of web documents such as HTML pages and images in order to
reduce bandwidth usage, server load, and perceived lag. A cache increases responsiveness
by storing copies of documents passing through it and serving those copies when
subsequent requests for the same documents are made, without having to access the
source again.
This task describes the functions of an HTTP proxy, reasons for their increased use, and
explains why the use of database-driven dynamic content reduces the effectiveness of HTTP
proxies (Wikipedia, Web Cache, accessed April 20, 2009).
5.2. Functions of an HTTP proxy
HTTP is a request/response standard of a client and a server. A client is the end-user, while
the server is the web site. The client making a HTTP request is called a user agent.
The server which saves or creates resources such as HTML files and images is called the
origin server. In between the user agent and origin server are intermediaries, such as proxies,
gateways, and tunnels (Fielding, et al., Internet RFC 2616, section 1.4, Retrieved on January
21, 2009).
When an HTTP client starts a request, it establishes a Transmission Control Protocol (TCP)
connection to a port on a host (port 80 by default). An HTTP server listening on that port waits
for the client to send a request message. When a request is received, the server sends back a
status line, such as "HTTP/1.1 200 OK", and a message of its own: the requested resource or
an error message (Wikipedia, HTTP, accessed April 20, 2009).
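A request routed through an HTTP proxy can be sketched with the standard java.net API; the proxy host, port, and target URL below are placeholders, not real servers:

```java
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxyExample {
    // Prepare a request that will travel via an HTTP proxy rather than
    // connecting directly to the origin server.
    static HttpURLConnection via(String proxyHost, int proxyPort, String target)
            throws Exception {
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress(proxyHost, proxyPort));
        URL url = new URL(target);
        // No network traffic yet: the TCP connection to the proxy is only
        // opened when the request is actually sent.
        return (HttpURLConnection) url.openConnection(proxy);
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                via("proxy.example.com", 8080, "http://www.example.com/");
        conn.setRequestMethod("GET");
        // conn.getResponseCode() would now send the request through the proxy,
        // which forwards it to the origin server or answers from its cache.
        System.out.println("Request prepared for: " + conn.getURL());
    }
}
```

The key point is that the TCP connection goes to the proxy, not the origin server; the proxy acts as the intermediary described above.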
5.3. Reasons for the increase in the use of HTTP proxies
HTTP proxy servers are popular because they are supported by many browsers and download
managers, and their use has spread to organizations such as news media, financial
institutions, and search companies.
An article on Stayinvisible.com describes HTTP proxies as supporting work on the Internet
with the HTTP and FTP protocols. An HTTP proxy stores information downloaded from the
Internet and serves it from its cache when subsequent requests are made. Some of the
abilities of HTTP proxies that encourage their use are as follows:
1. Anonymity of HTTP Proxy
HTTP proxy servers have several anonymity levels. These are:
- Transparent level - these proxies are not anonymous. They let web servers know that a
proxy server is used and show the IP address of the client. The task of such proxies is
information caching and support of Internet access for several computers via a single
connection.
- Anonymous level - these proxy servers let a remote computer or web server know that
a proxy server is used, but do not reveal the IP address of the client.
- Distorting level - unlike transparent and anonymous proxies, distorting proxy servers
transfer an IP address to the remote web server, but it is a randomly generated one.
- High anonymity level - these do not let web servers know that a proxy server is being
used at all, and do not reveal the client’s IP address (HTTP proxy servers, stayinvisible.com).
2. HTTP Proxy Chaining
HTTP proxies can be organized into a chain, which improves anonymity on the Internet. If
there is a corporate proxy and Internet access is possible only through it, a chain can be built
based on the corporate proxy:
- If the corporate proxy is a SOCKS proxy, any chain can be made.
- If the corporate proxy is an HTTP proxy, a chain can be created using only HTTP and
CGI proxies (Stayinvisible.com, HTTP Proxy Servers).
5.4. Effectiveness of HTTP proxies
The increased application of database-driven dynamic web content reduces the effectiveness
of HTTP proxies. Dynamic pages are generated afresh from a database for each request, so
their content may differ on every load, and a cached copy quickly becomes stale or cannot
be reused at all. The growth of database-driven sites in banking services, news organizations
and similar services has therefore reduced the benefit of HTTP proxies, since little of the
content passing through them can be served again from the cache.
This means that you have a web page that grabs information from a database and inserts that
information into the web page each time it is loaded.
If the information in the database changes, the web page changes accordingly without
human intervention.
This is seen on online banking sites where you log in and check your bank account balance.
Your bank account information is stored in a database and has been connected to the web
page with programming thus enabling you to see your banking information (Killersites.com,
accessed April 20, 2009).
6. REFERENCES
Abhishek Singh, CISSP, December 14, 2005, Demystifying Denial-Of-Service Attacks, Part 1
http://www.securityfocus.com/infocus/1853
Atul Kahate, July 11, 2007, What are Digital Signatures? Compute and Verify a Digital
Signature Using Java,
http://www.indicthreads.com/1480/what-are-digital-signatures-compute-and-verify-a-digital-
signature-using-java/
Clemm, A., Network Management Fundamentals, CiscoPress, 2006
http://en.wikipedia.org/wiki/Simple_Network_Management_Protocol
Dancho Danchev, January 7, 2005, Passwords - Common Attacks and Possible Solutions
http://www.windowsecurity.com/articles/Passwords-Attacks-Solutions.html
Fielding, et al., Internet RFC 2616, section 1.4,
http://en.wikipedia.org/wiki/HTTP
Jacobson, Van,1995, Congestion Avoidance and Control, ACM SIGCOMM Computer
Communication Review 25 (1): 157–187. doi:10.1145/205447.205462.
http://ee.lbl.gov/papers/congavoid.pdf.
Microsoft Technet, Windows Media Services Deployment Guide,
http://technet.microsoft.com/en-us/library/cc726032.aspx
Tanase, Matthew , IP Spoofing: An Introduction, March 11, 2003
http://www.securityfocus.com/infocus/1674
http://www.iss.net/Industry/educational_institutions/index.html
http://burks.bton.ac.uk/burks/pcinfo/hardware/ethernet/managed.htm
Networkdictionary.com, Network Management Technologies,
http://www.networkdictionary.com/networking/NetworkManagementTechnologies.php
Wikipedia, Brute Force Attack,
http://en.wikipedia.org/wiki/Brute_force_attack
Wikipedia, Internet Protocol Suite,
http://en.wikipedia.org/wiki/Internet_protocol_suite
Wikipedia, Managed Objects
http://en.wikipedia.org/wiki/Managed_object
Wikipedia, Management Information Base,
http://en.wikipedia.org/wiki/Management_Information_Base
Wikipedia, Network Management,
http://en.wikipedia.org/wiki/Network_management
Wikipedia, Public Key Cryptography
http://en.wikipedia.org/wiki/Public-key_cryptography
Wikipedia, Simple Network Management Protocol,
http://en.wikipedia.org/wiki/Simple_Network_Management_Protocol
Wikipedia, Message Authentication Code
http://en.wikipedia.org/wiki/Message_authentication_code
Wikipedia, TCP Congestion Avoidance Algorithm,
http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm
Wikipedia, Web Cache,
http://en.wikipedia.org/wiki/Web_cache
http://www.stayinvisible.com/proxy_encyclopedia/http_proxy_servers.html
http://www.killersites.com/articles/articles_databaseDrivenSites.htm
7. BIBLIOGRAPHY
Fred B Schneider, Hashes and Message Digests, Cornell University
Fall, Kevin; Sally Floyd (July 1996), "Simulation-based Comparisons of Tahoe, Reno and SACK
TCP" (PostScript), Computer Communications Review, ftp://ftp.ee.lbl.gov/papers/sacks.ps.Z.
Jacobson, Van (1995), Congestion Avoidance and Control, ACM SIGCOMM Computer
Communication Review 25 (1): 157–187. doi:10.1145/205447.205462,
http://ee.lbl.gov/papers/congavoid.pdf.
Ari Luotonen, Web Proxy Servers (Prentice Hall, 1997), ISBN 0-13-680612-0.
Duane Wessels, Web Caching (O'Reilly and Associates, 2001), ISBN 1-56592-536-X.
Rabinovich, Michael and Spatschak, Oliver, Web Caching and Replication (Addison Wesley,
2001), ISBN 0-201-61570-3.