Wooyoung Kim, Fall 2009


4.2 Request/Reply Communication

Part I: Basic Information [1]

Introduction

Request/Reply Communication

Remote Procedure Call (RPC)

RPC Operations

Parameter Passing and Data Conversion

Binding

RPC Compilation

RPC Exception and Failure Handling

Secure RPC

Introduction

Request/Reply Communication: a common technique for one application to request the services of another.

Remote Procedure Call (RPC)

The most widely used request/reply communication model

A language-level abstraction of the request/reply communication mechanism

How does RPC work?

What are the implementation issues for RPC?

Introduction – Cont’d

RPC Operations

RPC vs. local procedure call.

Similar in syntax, as both have 'calling' and 'waiting' procedures; RPC provides access transparency to remote operations.

Different in semantics, because RPC (possibly) involves delays and failures.

RPC Operations – how does it work?

This operation exposes some issues when implementing RPC.

RPC Operations – implementation issues

Parameter passing and data conversion.

Binding – locating the server and registering the service

Compilation – generation of stub procedures and linking

Exception and failure handling

Security

Parameter Passing

In a single process: via parameters and/or global variables.

In multiple processes on the same host: via message passing.

For RPC-based clients and servers, however, passing parameters is typically the only way.

Parameter marshaling: the rules for parameter passing and data/message conversion. It is the primary responsibility of the stub procedures.
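As a minimal sketch of what a client stub's marshaling does (the names and wire layout here are illustrative, not from any real RPC library), call-by-value parameters can be flattened into a network message like this:

```python
import struct

PROC_ADD = 1  # hypothetical procedure number for an add(x, y) service

def marshal_add_request(x: int, y: int) -> bytes:
    """Client stub: copy the call-by-value parameters into a flat,
    network-order message (procedure number, then two 32-bit ints)."""
    return struct.pack("!iii", PROC_ADD, x, y)

def unmarshal_add_request(msg: bytes):
    """Server stub: recover the procedure number and the parameters."""
    return struct.unpack("!iii", msg)
```

The server stub would dispatch on the procedure number and invoke the real procedure with the unpacked arguments.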

Parameter Passing – Cont’d

Call-by-value is fairly simple to handle: the client stub copies the values and packages them into a network message.

Call-by-name requires dynamic run-time evaluation of symbolic expressions.

Call-by-reference is hard to implement in distributed systems with non-shared memory.

Call-by-copy/restore: a combination of call-by-value and call-by-reference. Call-by-value at the entry of the call and call-by-reference at the exit of the call.

Parameter Passing – Cont’d

Most RPC implementations assume that parameters are passed by call-by-value or call-by-copy/restore.

Data Conversion

Three problems arise in conversion between data and messages:

Data typing

Data representation

Data transfer syntax

Data Conversion – Cont’d

Type checking across machines is difficult, because the data is passed through interprogram messages.

Should the data carry type information?

Each machine has its own internal representation of the data types.

This is complicated by the serial representation of bits and bytes in communication channels: different machines have different standards for transmitting the least or the most significant digit first.

Data Conversion – Cont’d

Transfer syntax

Rules regarding the representation of messages in a network.

For n data representations, n*(n-1)/2 translators are required.

A better solution is to invent a universal language: 2*n translators.

However, this increases the packing/unpacking overhead.
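The two counts can be checked in a couple of lines, assuming one bidirectional translator per pair of native representations versus one encoder and one decoder per representation when a universal form is used:

```python
def translators_pairwise(n: int) -> int:
    # one bidirectional translator for each pair of representations
    return n * (n - 1) // 2

def translators_canonical(n: int) -> int:
    # per representation: one encoder to, one decoder from, the universal form
    return 2 * n

# For n = 8 native representations, pairwise translation needs 28
# translators, while a universal language needs only 16.
```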

Data Conversion – Cont’d

ASN.1

Abstract Syntax Notation One

One of the most important developments in standards.

Used to define data structures.

Used for specifying the formats of protocol data units in network communications.

Data Conversion – Cont’d

ASN.1 and transfer syntax are the major facilities for building network presentation services.

ASN.1 can be used directly for data representation in RPC implementations.

Data types are checked during stub generation and compilation, so providing type information in messages is not necessary.

Data Conversion – Cont'd

Examples of canonical data representations for RPC:

Sun's XDR: eXternal Data Representation

DCE's IDL: Interface Definition Language
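A minimal sketch of two XDR-style encoders: XDR carries everything in 4-byte big-endian units, and strings are length-prefixed and zero-padded to a 4-byte boundary. The function names are illustrative, not Sun's library API:

```python
import struct

def xdr_int(value: int) -> bytes:
    # XDR integers: 4 bytes, big-endian (most significant byte first)
    return struct.pack(">i", value)

def xdr_string(s: str) -> bytes:
    # XDR strings: 4-byte length, the bytes themselves, then zero
    # padding up to a multiple of 4 bytes
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

msg = xdr_int(42) + xdr_string("hello")   # 4 + (4 + 5 + 3) = 16 bytes
```

Because both sides agree on this canonical form, each machine only needs to convert between its native representation and XDR.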

Binding

Binding is the process of connecting the client to the server.

Services are specified by a server interface written in an interface definition language such as XDR.

Binding – Cont’d

1. When it starts up, the server registers its communication endpoint by sending a request (program, version number, port number) to the port mapper. The port mapper manages the mapping.

2. Before an RPC, the client calls the RPC run-time library routine create, which contacts the port mapper to obtain a handle for accessing the server. The create message contains the server name, program, version number, and transport protocol.

Binding – Cont’d

3. The port mapper verifies the program and version numbers and returns the port number of the server to the client.

4. The client builds a client handle for subsequent use in RPCs. This establishes socket connections between clients and the server.
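The registration and lookup steps can be condensed into a toy, in-process port mapper. The class and method names are illustrative (the real Sun port mapper is a separate daemon reached over the network), and the program/port numbers are just an NFS-like example:

```python
# Toy port mapper: servers register (program, version) -> port,
# clients look the port up before opening a connection.

class PortMapper:
    def __init__(self):
        self._table = {}

    def register(self, program: int, version: int, port: int):
        # step 1: the server registers its communication endpoint
        self._table[(program, version)] = port

    def lookup(self, program: int, version: int) -> int:
        # steps 2-3: the client asks for the port; KeyError if unknown
        return self._table[(program, version)]

pm = PortMapper()
pm.register(program=100003, version=2, port=2049)   # server side
port = pm.lookup(100003, 2)                         # client side
```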

Binding – Cont’d

[Figure: binding sequence among client, port mapper, server, and directory server. Steps: (1) register, (2) create, (3) port #, (4) handle. If the server is unknown, the service is registered with the directory server, which returns the server machine address or a handle to the server.]

RPC compilation

Compilation of an RPC requires the following:

1. An interface specification file.

2. An RPC generator: its input is the interface specification file and its output is the client and server stub procedure source code.

3. A run-time library for supporting the execution of an RPC, including support for binding, data conversion, and communication.
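What the generator emits can be sketched by hand. Below is a hypothetical client stub for an interface `int add(int x, int y)`; the run-time library's transport is simulated by an in-process function so the sketch stays self-contained:

```python
import struct

def _transport(request: bytes) -> bytes:
    """Stand-in for the run-time library's send/receive. A real one
    would use the binding information to reach the remote server;
    here the server stub and procedure are inlined."""
    x, y = struct.unpack("!ii", request)
    return struct.pack("!i", x + y)

def add(x: int, y: int) -> int:
    """Client stub: marshal, call, unmarshal. To the caller this
    looks like an ordinary local procedure (access transparency)."""
    reply = _transport(struct.pack("!ii", x, y))
    return struct.unpack("!i", reply)[0]
```

The point of the generator is that neither the client nor the server programmer writes this marshaling code by hand.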

RPC Exception and Failure Handling

Exceptions

Abnormal conditions raised by the execution of stub and server procedures.

Ex. overflow/underflow, protection violation.

Failures

Problems caused by crashes of clients, servers, or the communication network.

Exception Handling

Exceptions must be reported to the clients.

Question: how does the server report status information to clients?

A client may have to stop the execution of a server procedure.

Question: how does a client send control information to a server?

Exception Handling – Cont’d

In a local procedure call: global variables and signals.

In a computer network, the exchange of control and status information must rely on a data channel.

In-band signaling, or out-of-band signaling (flag).

A separate channel (socket connection) is more flexible for RPC.

It is implemented as part of the stub library support and should be transparent.

Failure Handling

Cannot locate the server

Nonexistent server or outdated program; handled like an exception.

Messages can be delayed or lost

Eventually detected by a time-out or by no response from the server; the messages can be retransmitted.

Failure Handling – Cont’d

Problems with retransmission of requests:

In case of delay, the server gets multiple requests.

-> Make the operation idempotent (it can be executed multiple times with the same effect).

When idempotency is impossible (e.g., lock servers), give each request a sequence number.

Typical RPC implementations do not use sequence numbers – they are request-based only.
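For a non-idempotent service, the sequence-number scheme amounts to caching one reply per request number so that a retransmission replays the old reply instead of re-executing. A sketch with a hypothetical lock server (not any particular RPC implementation):

```python
# Non-idempotent server suppressing duplicates by sequence number.

class LockServer:
    def __init__(self):
        self.locked = False
        self._replies = {}          # seq -> cached reply

    def acquire(self, seq: int) -> str:
        if seq in self._replies:    # retransmission: replay the reply,
            return self._replies[seq]  # do not execute again
        reply = "granted" if not self.locked else "denied"
        if reply == "granted":
            self.locked = True
        self._replies[seq] = reply
        return reply

s = LockServer()
first = s.acquire(seq=1)   # the original request
dup = s.acquire(seq=1)     # a delayed duplicate of the same request
```

Without the cache, the duplicate would be denied by the very lock it had already acquired.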

Failure Handling – Cont’d

Crash of a server

The client attempts to reestablish a connection and retransmits its request.

If the server did not fail but the TCP connection did: examine the cache table for duplicated messages.

If the server failed, the cache table is lost; then raise an exception.

Failure Handling – Cont’d

Three assumptions for RPC semantics under failures:

The server raises an exception and the client retries: at least once.

The server raises an exception and the client gives up immediately: at most once.

No error report from the server; the client resubmits until it gets a reply or gives up: maybe.
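At-least-once semantics reduces to a retry loop on the client side. A self-contained sketch, with message loss simulated by a stand-in server that drops the first two requests (all names here are illustrative):

```python
class FlakyServer:
    """Simulates a lossy channel: the first `drops` calls time out."""
    def __init__(self, drops: int):
        self.drops = drops

    def call(self, request):
        if self.drops > 0:
            self.drops -= 1
            return None              # message lost / request timed out
        return ("ok", request)

def at_least_once(server, request, max_retries=5):
    # resend until a reply arrives; the operation may thus execute
    # one or more times on the server
    for _ in range(max_retries):
        reply = server.call(request)
        if reply is not None:
            return reply
    raise TimeoutError("no reply after retries")

reply = at_least_once(FlakyServer(drops=2), "read_page")
```

This is why at-least-once is only safe for idempotent operations: each lost reply triggers another execution.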

Failure Handling – Cont’d

The most desirable RPC semantics is exactly once, but it is hard to implement.

Against loss of the cache table: use at least once and log the cache table to storage.

Reload the cache table when the server recovers.

This adds overhead, since each service must be executed as a transaction at the server.

Failure Handling – Cont’d

Crash of a client process

The server has an orphan computation and its reply is undeliverable.

Orphan computations waste server resources and may confuse the client with invalid replies from previous connections.

How to eliminate orphan computations?

Client: on reboot, cleans up all previous requests.

Server: occasionally locates the owners of requests.

Expiration: each remote operation is given a maximum lifetime.
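The expiration scheme can be sketched as a deadline check inside the server's processing loop (the function and parameter names are hypothetical):

```python
import time

def run_with_lifetime(work_steps, lifetime_s: float):
    """Execute a remote operation's steps, abandoning it once its
    maximum lifetime has passed (i.e., it has become an orphan)."""
    deadline = time.monotonic() + lifetime_s
    for step in work_steps:
        if time.monotonic() > deadline:
            return None          # orphaned: give up, free resources
        step()
    return "done"

result = run_with_lifetime([lambda: None] * 3, lifetime_s=1.0)
```

A client that wants the work to continue would simply issue a fresh request, restarting the lifetime.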

Secure RPC

Security is important for RPC, since

1. RPC introduces vulnerability because it opens doors for attacks.

2. RPC became a cornerstone of client/server computation. All security features should be built on top of a secure RPC.

Primary security issues

Authentication of processes.

Confidentiality of messages.

Access control authorization from client to server.

Secure RPC – Cont’d

An authentication protocol for RPC should establish:

1. Mutual authentication.

2. Message integrity, confidentiality, and originality.

The design of a secure authentication protocol depends on:

How strong the security goals are.

What attacks are possible.

Some inherent limitations of the system.

Short-term solution: additional security features.

Secure RPC – Cont’d

Sun secure RPC

Built into Sun's basic RPC.

Assumes a trusted Network Information Service (NIS), which keeps a database of keys, including secret keys.

The keys are used for generating a true cryptographic session key.

When the user logs in, NIS supplies the key; the user's password is used to decrypt the secret key, and the password is then discarded.

Passwords are not transmitted over the network.

Secure RPC – Cont’d

Sun secure RPC – example

1. The client attempts to log in.

2. The login program deposits the client's key in the key server.

3. The key server generates a common session key by exponential key exchange.

4. The secret keys are erased after the common session keys are generated.

5. Each RPC message is authenticated by a conversation key.

6. The conversation key is kept in the server and used for the entire session, as it is not derived from the secret key.
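The exponential key exchange in step 3 is Diffie-Hellman style: each side publishes g^secret mod p, and both then compute the same session key without it ever crossing the network. A toy sketch with tiny numbers (real parameters are hundreds of bits long):

```python
p, g = 23, 5                  # toy public parameters; real ones are huge
a_secret, b_secret = 6, 15    # each side's private exponent

A = pow(g, a_secret, p)       # client publishes A
B = pow(g, b_secret, p)       # server publishes B

# each side combines its own secret with the other's public value
k_client = pow(B, a_secret, p)
k_server = pow(A, b_secret, p)
# k_client == k_server: a shared session key, never transmitted
```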

Secure RPC – Cont’d

Sun secure RPC – an RPC message may contain more:

Timestamp: checks message expiration.

Nonce: protects against the replay of a message.

Message digest: detects any tampering.

Sun secure RPC is simple, using the existing NIS.
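The three fields can be sketched with an HMAC standing in for the keyed message digest. This illustrates the idea only, not the actual Sun secure RPC wire format; the shared `conversation_key` and function names are placeholders:

```python
import hashlib, hmac, os, time

conversation_key = b"0123456789abcdef"   # placeholder shared key

def protect(payload: bytes):
    """Attach timestamp, nonce, and keyed digest to a message."""
    ts = int(time.time())
    nonce = os.urandom(8)
    mac = hmac.new(conversation_key,
                   payload + ts.to_bytes(8, "big") + nonce,
                   hashlib.sha256).digest()
    return payload, ts, nonce, mac

def verify(payload, ts, nonce, mac, max_age_s=60, seen=set()):
    # `seen` is a deliberately shared replay cache for this sketch
    if nonce in seen:                      # replayed message
        return False
    if time.time() - ts > max_age_s:       # expired message
        return False
    expected = hmac.new(conversation_key,
                        payload + ts.to_bytes(8, "big") + nonce,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return False                       # tampered message
    seen.add(nonce)
    return True

ok = verify(*protect(b"getattr /home"))    # fresh message: accepted
dup = protect(b"x")
_ = verify(*dup)
replay = verify(*dup)                      # same nonce again: rejected
```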


Other RPC Industry Implementations [2]

1984 - ONC RPC/NFS (Sun Microsystems Inc.)

Early 1990s - DCE RPC (Open Software Foundation)

Late 1990s - ORPC (object-oriented programming community)

1997 - DCOM (Microsoft)

2002 - .NET Remoting (Microsoft)

Doors (Solaris)

2003 - ICE (Internet Communications Engine)

DCOP - Desktop Communication Protocol (KDE)

Part II: Current Projects

ICE (Internet Communications Engine) [3,4,5]

ICE is object-oriented middleware providing RPC, grid computing, and publish/subscribe functionality.

Influenced by CORBA (Common Object Request Broker Architecture) in its design, and developed by ZeroC, Inc.

Supports C++, Java, .NET languages, Objective-C, Python, PHP, and Ruby on most major operating systems.

ICE components

[Figure: Ice components, from www.wikipedia.org]

ICE components

IceStorm: an object-oriented publish-and-subscribe framework.

IceGrid: provides object-oriented load balancing, failover, object discovery, and registry services.

IcePatch: facilitates the deployment of ICE-based software.

Glacier: a proxy-based service to enable communication through firewalls.

IceBox: an SOA-like container of executable services implemented with libraries.

Slice: the file format that programmers edit; servers and clients communicate based on the interfaces and classes declared in the Slice definitions.

Current Project with ICE [5]

Ice middleware in the New Solar Telescope's Telescope Control System, 2008 [5]

NST (New Solar Telescope) is an off-axis solar telescope with the world's largest aperture.

A TCS (telescope control system) was developed to control all aspects of the telescope:

Telescope Pointing Tracking Subsystem

Active Optics Control Subsystem

Handheld Controller

Main GUI

Current Project with ICE-Cont’d [5]

Ice advantages

Provides fast and scalable communications.

Simple to use.

Ice Embedded (Ice-E) supports the Microsoft Windows Mobile operating system for handheld devices.

The source code of Ice is provided under the GNU GPL (General Public License).

Continuously updated.

Ice problem

Frequent package updates cause coding changes.

Current Project with ICE-Cont’d [5]

TCS Implementation

Star-like structure: all subsystems communicate through HQ (headquarters).

Each subsystem acts as both a server and a client.

Each subsystem uses the same Ice interface.

The interface for every object includes these operations: Register, Unregister, SendError, SendCommand, RequestInformation, SendNotification, SendReply, Error.

Subsystems can be started in any order; they only need to register with HQ and the IcePack registry.

Part III: Future Work

Compatible updates with old versions (e.g., ICE).

Trend toward object-oriented implementation: a general-purpose tool to construct object-based modular systems, transparently distributed at run-time.

References

1. Randy Chow, Theodore Johnson, "Distributed Operating Systems & Algorithms", 1997.

2. Interprocess Communications, http://en.wikipedia.org/wiki/Interprocess_communication

3. ZeroC, Inc., http://zeroc.com/ice.html

4. ICE on Wikipedia, http://en.wikipedia.org/wiki/Internet_Communications_Engine

5. Shumko, Sergij. "Ice middleware in the New Solar Telescope's Telescope Control System". Astronomical Data Analysis Software and Systems XVII, ASP Conference Series, Vol. XXX, 2008, Canada.

Thank You