ugc net questions solution


Binary exponential backoff / truncated exponential backoff 

In a variety of computer networks, binary exponential backoff or truncated binary exponential backoff refers to an algorithm used to space out repeated retransmissions of the same block of data, often as part of network congestion avoidance.

Examples are the retransmission of frames in carrier sense multiple access with collision avoidance (CSMA/CA) and carrier sense multiple access with collision detection (CSMA/CD) networks, where this algorithm is part of the channel access method used to send data on these networks. In Ethernet networks, the algorithm is commonly used to schedule retransmissions after collisions. The retransmission is delayed by an amount of time derived from the slot time and the number of attempts to retransmit.

After c collisions, a random number of slot times between 0 and 2^c - 1 is chosen. For the first collision, each sender will wait 0 or 1 slot times. After the second collision, the senders will wait anywhere from 0 to 3 slot times (inclusive). After the third collision, the senders will wait anywhere from 0 to 7 slot times (inclusive), and so forth. As the number of retransmission attempts increases, the number of possibilities for delay increases exponentially.

The 'truncated' simply means that after a certain number of increases, the exponentiation stops; i.e. the retransmission timeout reaches a ceiling, and thereafter does not increase any further. For example, if the ceiling is set at i = 10 (as it is in the IEEE 802.3 CSMA/CD standard [1]), then the maximum delay is 1023 slot times.

An example of an exponential backoff algorithm

This example is from the Ethernet protocol [2], where a sending host is able to know when a collision has occurred (that is, another host has tried to transmit) while it is sending a frame. If both hosts attempted to retransmit as soon as a collision occurred, there would be yet another collision, and the pattern would continue forever. The hosts must choose a random value within an acceptable range to ensure that this situation doesn't happen. An exponential backoff algorithm is therefore used. The figure 51.2 µs (the classic 10 Mbps Ethernet slot time) has been given here as an example; in practice, it could be replaced by any positive value.

1. When a collision first occurs, send a jamming signal to prevent further data from being sent.
2. Resend the frame after either 0 or 51.2 µs, chosen at random.
3. If that fails, resend the frame after either 0, 51.2, 102.4, or 153.6 µs.
4. If that still doesn't work, resend the frame after k · 51.2 µs, where k is a random number between 0 and 2^3 - 1.
5. In general, after the cth failed attempt, resend the frame after k · 51.2 µs, where k is a random number between 0 and 2^c - 1.
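
The steps above can be condensed into a small sketch. This is a minimal illustration, assuming the 51.2 µs slot time and the ceiling of 10 mentioned in the text, not the full IEEE 802.3 procedure; the function and constant names are made up for the example.

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time, in microseconds
BACKOFF_CEILING = 10  # exponent stops growing here (the "truncated" part)

def backoff_delay_us(collisions: int) -> float:
    """Return a random backoff delay (in microseconds) after `collisions` collisions.

    After c collisions, k is drawn uniformly from 0 .. 2^min(c, ceiling) - 1,
    and the delay is k slot times.
    """
    exponent = min(collisions, BACKOFF_CEILING)
    k = random.randint(0, 2 ** exponent - 1)
    return k * SLOT_TIME_US

# Example: show the delays chosen after successive collisions.
if __name__ == "__main__":
    for c in range(1, 6):
        print(f"after collision {c}: wait {backoff_delay_us(c):.1f} us")
```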

Expected backoff 


Given a uniform distribution of backoff times, the expected backoff time is the mean of the possibilities. That is, after c collisions the number of backoff slots is in [0, 1, ..., N], where N = 2^c - 1, and the expected backoff time (in slots) is

E[backoff] = (0 + 1 + ... + N) / (N + 1).

For example, to find the expected backoff time for the third (c = 3) collision, one could first calculate the maximum backoff time, N:

N = 2^c - 1
N = 2^3 - 1 = 8 - 1
N = 7

... and then calculate the mean of the backoff time possibilities:

(0 + 1 + 2 + ... + 7) / 8 = 28 / 8 = 3.5

... obtaining 3.5 as the expected number of backoff slots after 3 collisions.

The above derivation is largely unnecessary when you realize that the mean of consecutive integers is equal to the mean of the largest and smallest numbers in the set. That is, the mean of a set A of consecutive integers a0, a1, a2, ..., am is simply the mean of a0 and am, or (a0 + am) / 2. When applied to the above problem of finding the expected backoff time, the formula becomes simply

E[backoff] = (0 + N) / 2 = N / 2 = (2^c - 1) / 2

... or, otherwise interpreted, half of the maximum backoff time.
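
As a quick numerical check of the N / 2 result above (a throwaway snippet, not part of any standard library):

```python
# The mean of the backoff possibilities 0..N equals half the maximum, N/2.
for c in range(1, 6):
    N = 2 ** c - 1
    slots = range(N + 1)                # possible backoff values 0..N
    mean = sum(slots) / len(slots)      # brute-force expected value
    assert mean == N / 2
    print(f"c={c}: N={N}, expected backoff = {mean} slots")
```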


Server Crashes

- The server can crash either before executing or after executing (before sending the reply)
- A crash after execution needs to be reported to the client
- A crash before execution can be handled by retransmission
- The client's OS cannot distinguish between the two

A server in client-server communication

a)  Normal case

b)  Crash after execution

c)  Crash before execution

Handling Server Crashes

- Wait until the server reboots and try again
  - At-least-once semantics
- Give up immediately and report failure
  - At-most-once semantics
- Guarantee nothing
- The need is for exactly-once semantics
- Two messages to clients
  - Request acknowledgement
  - Completion message


Server and Client Strategies

- Server strategies
  - Send the completion message before the operation
  - Send the completion message after the operation
- Client strategies
  - Never reissue a request
  - Always reissue a request
  - Only reissue a request if no acknowledgement is received
  - Only reissue a request if no completion message is received
- The client never knows the exact point at which the server crashed
- Server failures change RPC fundamentally
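
To make one of the client strategies above concrete ("only reissue a request if no acknowledgement is received"), here is a rough sketch. It assumes a hypothetical request/acknowledgement protocol over a connected socket; the function name, timeout, and retry limit are illustrative. The resulting behaviour is at-least-once, with exactly the ambiguity described above when the server crashes.

```python
import socket

def call_with_ack(sock: socket.socket, request: bytes,
                  timeout_s: float = 1.0, max_retries: int = 3) -> bytes:
    """Reissue `request` only while no acknowledgement arrives.

    The server may execute the request more than once if its acknowledgement
    is lost: the client cannot tell a lost reply from a crash before execution.
    """
    sock.settimeout(timeout_s)
    for attempt in range(max_retries):
        sock.sendall(request)
        try:
            return sock.recv(4096)   # acknowledgement / completion message
        except socket.timeout:
            continue                 # no ack: reissue the request
    raise RuntimeError(f"no acknowledgement after {max_retries} attempts")
```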

In serializability, ordering of read/writes is important:

(a) If two transactions only read a data item, they do not conflict and order is not important.

(b) If two transactions either read or write completely separate data items, they do not conflict and

order is not important.

(c) If one transaction writes a data item and another reads or writes the same data item, the order of execution is important.

Serializability Violations

Two transactions T1 and T2 are said to conflict if some action t1 of T1 and an action t2 of T2 access the same object and at least one of the actions is a write. The conflict is called an RW-conflict if the write set of one transaction intersects with the read set of another. A WW-conflict occurs if the conflict is between two writes.

WR Conflicts (Dirty Read)

- The result is not equal to any serial execution!
- W-R conflict: T2 reads something T1 wrote previously (a dirty read).

RW Conflicts (Unrepeatable Read)

- T2 overwrites what T1 read.
- If T1 reads it again, it will see something new!

When would this happen? For example, the increment/decrement example.

- Again, not equivalent to a serial execution.

WW Conflicts (Overwriting Uncommitted Data)

- T2 overwrites what T1 wrote. Example: two transactions to update items to be kept equal.
- Usually occurs in conjunction with other anomalies.
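
To make the increment/decrement example mentioned above concrete, the toy interleaving below runs two "transactions" on a shared value with no locking; one update is lost and the final state matches no serial order. The variable names are made up for the illustration.

```python
# Two transactions, T1 (add 10) and T2 (subtract 10), on a shared balance.
# Executed serially in either order, the final balance is 100. The
# interleaving below loses T1's update (a classic lost-update anomaly
# built from RW and WW conflicts).

balance = 100

t1_read = balance          # T1: read
t2_read = balance          # T2: read (before T1 writes)
balance = t1_read + 10     # T1: write 110
balance = t2_read - 10     # T2: write 90, overwriting T1's update

print(balance)             # 90, not 100 -> not equivalent to any serial schedule
```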


Aborted Transactions: All actions of aborted transactions are to be undone, as if the aborted transactions never happened.

Two-Phase Locking (2PL)

Strict 2PL:

- If T wants to read an object, it first obtains an S lock. If T wants to modify an object, it first obtains an X lock.
- Hold all locks until the end of the transaction.
- Guarantees serializability and recoverable schedules, too! It also avoids WW problems.

2PL:

- A slight variant of strict 2PL.
- Transactions can release locks before the end (commit or abort).
- But after releasing any lock, a transaction can acquire no new locks.
- Guarantees serializability.

A two-phase locking (2PL) scheme is a locking scheme in which a transaction cannot request a new lock after releasing a lock. Two-phase locking therefore involves two phases:

- Growing Phase (Locking Phase) - when locks are acquired and none released.
- Shrinking Phase (Unlocking Phase) - when locks are released and none acquired.

The attraction of the two-phase algorithm derives from a theorem which proves that the two-phase locking algorithm always leads to serializable schedules. This is a sufficient condition for serializability, although it is not necessary.

Strict two-phase locking (Strict 2PL) is the most widely used locking protocol and has the following two rules:

- If a transaction wants to read (respectively, modify) an object, it first requests a shared (respectively, exclusive) lock on the object.
- All locks held by a transaction are released when the transaction is completed.

In effect, the locking protocol allows only safe interleavings of transactions.

Q) Three transactions A, B and C arrive in the time sequence A, then B and then C. The transactions are run concurrently on the database. Can we predict what the result would be if 2PL is used?

- No, we cannot, since we are not able to predict which serial schedule the 2PL schedule is going to be equivalent to. The 2PL schedule could be equivalent to any of the following six serial schedules: ABC, ACB, BAC, BCA, CAB, CBA.

Two-Phase Locking (2PL)

A transaction follows the 2PL protocol if all locking operations precede the first unlock operation in the transaction.

Two phases for a transaction:

- Growing phase - acquires all locks but cannot release any locks.
- Shrinking phase - releases locks but cannot acquire any new locks.

Preventing the Lost Update Problem using 2PL (example in slides)

Preventing the Uncommitted Dependency Problem using 2PL
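
A minimal sketch of the strict 2PL discipline described above: a lock is acquired before every access (growing phase) and nothing is released until commit (the shrinking phase happens all at once). This is a toy illustration with exclusive locks only; a real lock manager would also handle shared locks, lock upgrades, deadlocks, and undo on abort. All class and function names here are invented for the example.

```python
import threading
from typing import Dict, Set

# One exclusive lock per object name; a real lock manager would also support
# shared (S) locks, upgrades, wait queues and deadlock handling.
_locks: Dict[str, threading.Lock] = {}

class StrictTwoPhaseTxn:
    """Strict 2PL sketch: lock before every access, release only at commit()."""

    def __init__(self, db: Dict[str, int]):
        self.db = db
        self.held: Set[str] = set()

    def _lock(self, name: str) -> None:
        lock = _locks.setdefault(name, threading.Lock())
        if name not in self.held:
            lock.acquire()          # growing phase: only acquire, never release
            self.held.add(name)

    def read(self, name: str) -> int:
        self._lock(name)            # a real system would take an S lock here
        return self.db[name]

    def write(self, name: str, value: int) -> None:
        self._lock(name)            # exclusive (X) lock
        self.db[name] = value

    def commit(self) -> None:       # release everything only at the end
        for name in self.held:
            _locks[name].release()
        self.held.clear()

# Usage: transfer 10 from A to B under strict 2PL.
db = {"A": 100, "B": 100}
t = StrictTwoPhaseTxn(db)
t.write("A", t.read("A") - 10)
t.write("B", t.read("B") + 10)
t.commit()
print(db)   # {'A': 90, 'B': 110}
```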

The refresh rate on a computer is used to describe how often a monitor draws the current data to the screen. Refresh rate affects visual quality for CRT, LCD and even HD TVs, as they are commonly connected to computers for high-definition playback. Refresh rate is not to be confused with frame rate: refresh rate is a measure of how often a frame is written to the screen, repeated or not, while frame rate refers to how often a new frame is sent from the graphics source.

Definition: Resolution is the term used to describe the number of dots, or pixels, used to display

an image.

Higher resolutions mean that more pixels are used to create the image, resulting in a crisper,

cleaner image.

The display, or resolution, on a monitor is composed of thousands of pixels or dots. This display is indicated by a number combination, such as 800 x 600. This indicates that there are 800 dots horizontally across the monitor by 600 lines of dots vertically, equaling 480,000 dots that make up the image you see on the screen.
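
A trivial arithmetic check of the 800 x 600 figure above (the extra resolutions are just additional examples, not from the text):

```python
# Total pixel count for a few display resolutions.
for width, height in [(800, 600), (1024, 768), (1920, 1080)]:
    print(f"{width} x {height} = {width * height:,} pixels")
# 800 x 600 = 480,000 pixels, matching the figure in the text.
```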

About XML

XML (eXtensible Markup Language) provides a set of rules for defining semantic tags that can describe virtually any type of data in a text file. Data stored in XML-format files is both human- and machine-readable, and is often relatively easy to interpret either visually or programmatically. The structure of data stored in an XML file is described by either a Document Type Definition (DTD) or an XML schema, which can either be included in the file itself or referenced from an external network location.

 About XML Parsers

There are two basic types of parsers for XML data:

- Tree-based parsers
- Event-based parsers

Tree-Based Parsers

Tree-based parsers map an XML document into a tree structure in memory, allowing you to select elements by navigating through the tree. This type of parser is generally based on the Document Object Model (DOM), and the tree is often referred to as a DOM tree. The IDLffXMLDOM object classes implement a tree-based parser; for more information, see Using the XML DOM Object Classes.

Tree-based parsers are especially useful when the XML data file being parsed is relatively small. Having access to the entire data set at one time can be convenient and makes it easy to process data based on multiple data values stored in the tree. However, if the tree structure is larger than will fit in physical memory, or if the data must be converted into a new (local) data structure before use, then tree-based parsers can be slow and cumbersome.

Event-Based Parsers

Event-based parsers read the XML document sequentially and report parsing events (such as the start or end of an element) as they occur, without building an internal representation of the data structure. The most common examples of event-based XML parsers use the Simple API for XML (SAX) and are often referred to as SAX parsers.

Event-based parsers allow the programmer to write callback routines that perform an appropriate action in response to an event reported by the parser. Using an event-based parser, you can parse very large data files and create application-specific data structures. The IDLffXMLSAX object class implements an event-based parser based on the SAX version 2 API.
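
The IDLffXMLDOM and IDLffXMLSAX classes mentioned above are specific to IDL. As a language-neutral illustration of the same two parser styles, the sketch below uses Python's standard-library DOM and SAX modules on a made-up document; the element names and handler are purely illustrative.

```python
import xml.dom.minidom
import xml.sax

SAMPLE = b"<catalog><book id='1'>Networks</book><book id='2'>Databases</book></catalog>"

# Tree-based (DOM): the whole document is loaded into memory and navigated.
dom = xml.dom.minidom.parseString(SAMPLE)
for book in dom.getElementsByTagName("book"):
    print("DOM:", book.getAttribute("id"), book.firstChild.data)

# Event-based (SAX): callbacks fire as elements are encountered; nothing is
# kept in memory unless the handler chooses to keep it.
class BookHandler(xml.sax.ContentHandler):
    def startElement(self, name, attrs):
        if name == "book":
            print("SAX: start of book", attrs.getValue("id"))

    def characters(self, content):
        if content.strip():
            print("SAX: text", content.strip())

xml.sax.parseString(SAMPLE, BookHandler())
```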

What is a parser?

- A program that analyses the grammatical structure of an input, with respect to a given formal grammar.
- The parser determines how a sentence can be constructed from the grammar of the language by describing the atomic elements of the input and the relationship among them.

Anti-aliasing

In digital signal processing, anti-aliasing is the technique of minimizing the distortion artifacts known as

aliasing when representing a high-resolution signal at a lower resolution. Anti-aliasing is used in digital

photography, computer graphics, digital audio, and many other applications.

Disadvantages (of a discontinuity-edge anti-aliasing technique)

- It requires a polygonal model.
- It requires explicitly finding discontinuity edges (e.g. silhouettes), which can be expensive for dynamic models. Delay streams, a hardware mechanism recently proposed by Aila et al. [2003], can be used to identify discontinuity edges and reduce the CPU load.
- It requires an extra rendering pass to draw the discontinuity edges. On the plus side, the number of vertices required is small compared to the entire model. Similarly, the number of pixels drawn at discontinuity edges is a tiny fraction of the framebuffer (typically around 1%).
- For proper compositing, this method requires a back-to-front sort of the edges. Delay streams can also be used to accelerate this step in hardware.

So What is Anti-Aliasing? 

Anti-aliasing allows the colors at the edge of pixels to bleed into one another, creating a sort of blurred

effect. It may sound counterintuitive, but blurring the edges of each individual pixel will result in sharper

images with smoother lines and more natural color differentiation.

8/8/2019 ugc net questions solution

http://slidepdf.com/reader/full/ugc-net-questions-solution 8/12

As an experiment, try taking one of your digital photographs and dramatically reducing it in size. You will no doubt notice that as the size decreases, the lines of objects in the photograph will seem smoother. As the pixels in the image shrink, they become less visible to the naked eye. This is a similar effect to anti-aliasing: slightly blurring each pixel makes it stand out less and blend more smoothly into the image.
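
A rough sketch of that blending idea: averaging each 2 x 2 block of a higher-resolution image (a simple box filter) gives edge pixels intermediate values instead of hard black/white jumps. The tiny "image" below is invented for the illustration, and real anti-aliasing filters are considerably more sophisticated.

```python
# A 4x4 black/white "image" with a hard diagonal edge (1 = white, 0 = black).
hi_res = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

def downsample_2x2(img):
    """Average each 2x2 block (a box filter), halving the resolution."""
    out = []
    for r in range(0, len(img), 2):
        row = []
        for c in range(0, len(img[0]), 2):
            block = [img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1]]
            row.append(sum(block) / 4.0)
        out.append(row)
    return out

print(downsample_2x2(hi_res))
# [[1.0, 0.25], [0.25, 0.0]] -- the edge pixels take intermediate grey values,
# which is the "bleeding into one another" effect described in the text.
```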

Most digital cameras come fully equipped with a built-in anti-aliasing feature. Anti-aliasing will make your photographs look more natural and will help to offset any loss of quality caused by a lower-resolution camera or setting. While it is always advisable to shoot at the highest resolution your camera can manage, anti-aliasing is a helpful tool in ensuring your images always turn out great.

Anti-aliasing is one of those features that you may have never noticed, but once you understand how it impacts your photographs, you can't imagine doing without it. Jagged lines are one of the major downfalls of digital imagery, and anti-aliasing helps to bridge the gap between ultra-high-resolution cameras and equipment that falls more in the price range of the average hobbyist. If your camera has an anti-aliasing feature, turn it on and leave it on; your photos will thank you.

z-Buffer Algorithm

The z-Buffer algorithm is one of the most commonly used routines. It is simple, easy to

implement, and is often found in hardware.

The idea behind it is uncomplicated: assign a z-value to each polygon and then display the one (pixel by pixel) that has the smallest value.

There are some advantages and disadvantages to this:

Advantages:

- Simple to use
- Can be implemented easily in object or image space
- Can be executed quickly, even with many polygons

Disadvantages:

- Takes up a lot of memory
- Can't do transparent surfaces without additional code

z-buffer Algorithm


- Store a z-buffer along with the frame buffer; a z-value is stored for each pixel.
- Each z-value is initialized to the back of the clipping plane. The frame buffer is initialized to the background color.
- Polygons are scan-converted in arbitrary order.
- If the polygon being scan-converted at a pixel is no farther from the viewer than the point whose color and depth are currently in the buffers, then the new point's color and depth replace the old values.
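
The steps above translate almost directly into code. Below is a minimal sketch that assumes smaller z means closer to the viewer and skips scan conversion by working on pre-computed per-pixel fragments; the fragment lists and names are invented for the illustration.

```python
WIDTH, HEIGHT = 4, 3
FAR = float("inf")                 # back of the clipping plane
BACKGROUND = "."

# Initialize the two buffers as described in the text.
z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Keep the new fragment only if it is no farther than what is stored."""
    if z <= z_buffer[y][x]:
        z_buffer[y][x] = z
        frame_buffer[y][x] = color

# "Polygons" reduced to pre-rasterized fragments, drawn in arbitrary order.
red_fragments = [(x, 1, 5.0, "R") for x in range(WIDTH)]       # farther strip
blue_fragments = [(x, 1, 2.0, "B") for x in range(2, WIDTH)]   # nearer, overlapping

for frag in red_fragments + blue_fragments:
    plot(*frag)

for row in frame_buffer:
    print("".join(row))
# The middle row prints RRBB: the nearer (smaller z) blue fragments win
# wherever the two overlap, regardless of drawing order.
```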

z-buffer Algorithm: Advantages

- Simplicity
- Polygons can be rendered in any order
- The primitives need not be polygons (depth and shading values need to be computed at each pixel)
- The time taken by the visible-surface calculation tends to be independent of the number of polygons on average (or is the complexity linear?)
- Can be extended to handle CSG objects
- The z-buffer can be saved with the image and used later to merge in other objects whose z can be computed

z-buffer Algorithm: Disadvantages

- High memory requirements
- The required z-precision is maintained at a higher resolution as compared to x, y precision
- Aliasing problems (due to finite precision)
- Rendering almost collinear edges or coplanar polygons (visible artifacts)
- Issues in implementing transparency and translucency effects (due to the arbitrary order of rendering)


Concurrency control mechanisms

Types of mechanisms

The main categories of concurrency control mechanisms are:

- Optimistic - Delay the checking of whether a transaction meets the isolation and other integrity rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort the transaction if the desired rules are violated.
- Pessimistic - Block an operation of a transaction if it may cause violation of the rules, until the possibility of violation disappears.
- Semi-optimistic - Block operations in some situations, if they may cause violation of some rules, and do not block in other situations, while delaying rule checking to the transaction's end, as done with optimistic.

Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods, each of which has many variants and which in some cases may overlap or be combined, include:


- Two-phase locking (2PL) - Controlling access to data by locks assigned to the data.
- Serialization (also called serializability, or conflict, or precedence) graph checking - Checking for cycles in the schedule's graph.
- Timestamp ordering (TO) - Assigning timestamps to transactions, and controlling or checking access to data by timestamp order.

The pessimistic approach assumes conflicts will occur and avoids them through exclusive locks and explicit synchronization. With the optimistic approach, it is assumed conflicts won't occur, and they are dealt with when they happen.
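
As a small sketch of the optimistic approach described above, the toy store below records the versions a transaction reads, buffers its writes, and validates at commit time, aborting if anything it read has changed. The class and method names are made up for the example; real validators are considerably more involved.

```python
class OptimisticStore:
    """Toy optimistic concurrency control: validate read versions at commit."""

    def __init__(self):
        self.data = {}        # name -> value
        self.version = {}     # name -> integer version counter

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, txn, name):
        txn["reads"][name] = self.version.get(name, 0)   # remember version seen
        return txn["writes"].get(name, self.data.get(name))

    def write(self, txn, name, value):
        txn["writes"][name] = value                      # buffered, not applied yet

    def commit(self, txn):
        # Validation phase: abort if anything this transaction read has changed.
        for name, seen in txn["reads"].items():
            if self.version.get(name, 0) != seen:
                return False                             # conflict -> abort
        # Write phase: apply buffered writes and bump versions.
        for name, value in txn["writes"].items():
            self.data[name] = value
            self.version[name] = self.version.get(name, 0) + 1
        return True

store = OptimisticStore()
store.data["x"], store.version["x"] = 100, 1

t1, t2 = store.begin(), store.begin()
v1 = store.read(t1, "x")
v2 = store.read(t2, "x")
store.write(t1, "x", v1 + 10)
store.write(t2, "x", v2 - 10)
print(store.commit(t1))   # True: the first committer wins
print(store.commit(t2))   # False: x changed since t2 read it, so t2 aborts
```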

What is the relationship between clipping and repainting?

When a window is repainted by the AWT painting thread, it sets the clipping region to the area of the window that requires repainting.