
    SECOND REVIEW REPORT

    MONTE CARLO SIMULATION

    Monte Carlo methods are a class of computational algorithms that rely on

    repeated random sampling to compute their results. Monte Carlo methods are often used

    in simulating physical and mathematical systems. Because of their reliance on repeated

    computation of random or pseudo-random numbers, these methods are most suited to

    calculation by a computer and tend to be used when it is infeasible or impossible to

    compute an exact result with a deterministic algorithm.

    Monte Carlo simulation methods are especially useful in studying systems with a large
    number of coupled degrees of freedom, such as fluids, disordered materials, strongly

    coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte

    Carlo methods are useful for modeling phenomena with significant uncertainty in inputs,

    such as the calculation of risk in business. These methods are also widely used in

    mathematics: a classic use is for the evaluation of definite integrals, particularly

    multidimensional integrals with complicated boundary conditions. It is a widely

    successful method in risk analysis when compared with alternative methods or human

    intuition. When Monte Carlo simulations have been applied in space exploration and oil

    exploration, actual observations of failures, cost overruns and schedule overruns are

    routinely better predicted by the simulations than by human intuition or alternative "soft"

    methods.

    There is no single Monte Carlo method; instead, the term describes a large and widely-

    used class of approaches. However, these approaches tend to follow a particular pattern:

    1. Define a domain of possible inputs.

    2. Generate inputs randomly from the domain using a specified probability

    distribution.

    3. Perform a deterministic computation using the inputs.

    4. Aggregate the results of the individual computations into the final result.


    For example, the value of π can be approximated using a Monte Carlo method:

    1. Draw a square on the ground, then inscribe a circle within it. From plane

    geometry, the ratio of the area of an inscribed circle to that of the surrounding

    square is π/4.

    2. Uniformly scatter some objects of uniform size throughout the square. For

    example, grains of rice or sand.

    3. Since the two areas are in the ratio π/4, the objects should fall in the areas in

    approximately the same ratio. Thus, counting the number of objects in the circle

    and dividing by the total number of objects in the square will yield an

    approximation for / 4. Multiplying the result by 4 will then yield an

    approximation for itself.
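    As a concrete illustration (a minimal sketch; the sample count is an arbitrary choice), the grain-scattering procedure above can be written in a few lines of MATLAB:

    % Monte Carlo estimate of pi: scatter random points in the square
    % [-1,1]x[-1,1] and count how many land inside the inscribed unit circle.
    n = 100000;                    % number of "grains" to drop
    x = 2*rand(n,1) - 1;           % uniform x-coordinates in [-1, 1]
    y = 2*rand(n,1) - 1;           % uniform y-coordinates in [-1, 1]
    inCircle = (x.^2 + y.^2) <= 1; % does each grain land inside the circle?
    piEst = 4 * mean(inCircle);    % areas are in the ratio pi/4, so scale by 4
    fprintf('Estimated pi = %.4f\n', piEst)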

    Notice how the approximation follows the general pattern of Monte Carlo algorithms.

    First, we define a domain of inputs: in this case, it's the square which circumscribes our

    circle. Next, we generate inputs randomly (scatter individual grains within the square),

    then perform a computation on each input (test whether it falls within the circle). At the

    end, we aggregate the results into our final result, the approximation of π. Note, also, two

    other common properties of Monte Carlo methods: the computation's reliance on good

    random numbers, and its slow convergence to a better approximation as more data points

    are sampled. If grains are purposefully dropped into only, for example, the center of the

    circle, they will not be uniformly distributed, and so our approximation will be poor. An

    approximation will also be poor if only a few grains are randomly dropped into the whole

    square. Thus, the approximation of π will become more accurate both as the grains are

    dropped more uniformly and as more are dropped.


    What is Monte Carlo Simulation?

    The Monte Carlo method was invented by scientists working on the atomic

    bomb in the 1940s, who named it for the city in Monaco famed for its casinos and games

    of chance. Its core idea is to use random samples of parameters or inputs to explore the
    behavior of a complex system or process. The scientists faced physics problems, such as

    models of neutron diffusion, that were too complex for an analytical solution -- so they

    had to be evaluated numerically. They had access to one of the earliest computers --

    MANIAC -- but their models involved so many dimensions that exhaustive numerical

    evaluation was prohibitively slow. Monte Carlo simulation proved to be surprisingly

    effective at finding solutions to these problems. Since that time, Monte Carlo methods

    have been applied to an incredibly diverse range of problems in science, engineering, and

    finance -- and business applications in virtually every industry.

    Why Should I Use Monte Carlo Simulation?

    Most business activities, plans and processes are too complex for an analytical solution --

    just like the physics problems of the 1940s. But you can build a spreadsheet model that

    lets you evaluate your plan numerically -- you can change numbers, ask 'what if' and see

    the results. This is straightforward if you have just one or two parameters to explore.

    But many business situations involve uncertainty in many dimensions -- for example,

    variable market demand, unknown plans of competitors, uncertainty in costs, and many

    others -- just like the physics problems in the 1940s. If your situation sounds like this,

    you may find that the Monte Carlo method is surprisingly effective for you as well.

    EXAMPLE:

    Computer simulation has to do with using computer models to imitate real life or

    make predictions. When you create a model with a spreadsheet like Excel, you have a
    certain number of input parameters and a few equations that use those inputs to give you

    a set of outputs (or response variables). This type of model is usually deterministic,

    meaning that you get the same results no matter how many times you re-calculate.

    [ Example 1: A Deterministic Model for Compound Interest ]


    Figure 1: A parametric deterministic model maps a set of input variables to a set of output variables.
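    Example 1 itself is not reproduced in this transcript, so the following is only a hypothetical sketch of such a deterministic compound-interest model in MATLAB; the principal, rate, and horizon are illustrative values, not figures from the report:

    % Deterministic model: the same inputs always produce the same output.
    P = 1000;            % principal (illustrative value)
    r = 0.05;            % annual interest rate (illustrative value)
    n = 10;              % number of years (illustrative value)
    F = P * (1 + r)^n;   % compound-interest formula for the future value
    fprintf('Future value after %d years: %.2f\n', n, F)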

    Monte Carlo simulation is a method for iteratively evaluating a deterministic model using

    sets of random numbers as inputs. This method is often used when the model is complex,

    nonlinear, or involves more than just a couple of uncertain parameters. A simulation can

    typically involve over 10,000 evaluations of the model, a task which in the past was only

    practical using supercomputers.

    By using random inputs, you are essentially turning the deterministic model into a

    stochastic model. Example 2 demonstrates this concept with a very simple problem.
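    Example 2 is likewise not reproduced here; as a stand-in, the sketch below turns the compound-interest model above into a stochastic one by drawing the interest rate from a uniform distribution, which is the idea the text describes (the rate range is assumed):

    % Stochastic version: replace the fixed rate with uniform random draws.
    P = 1000;  n = 10;                 % fixed inputs, as before
    r = 0.03 + 0.04*rand(1000,1);      % 1000 rates, uniform on [0.03, 0.07]
    F = P * (1 + r).^n;                % evaluate the model once per draw
    fprintf('Mean future value: %.2f\n', mean(F))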

    In Example 2, we used simple uniform random numbers as the inputs to the model.

    However, a uniform distribution is not the only way to represent uncertainty. Before

    describing the steps of the general MC simulation in detail, a brief word about uncertainty

    propagation:

    The Monte Carlo method is just one of many methods for analyzing uncertainty

    propagation, where the goal is to determine how random variation, lack of knowledge, or

    error affects the sensitivity, performance, or reliability of the system that is being

    modeled. Monte Carlo simulation is categorized as a sampling method because the inputs

    are randomly generated from probability distributions to simulate the process of sampling

    from an actual population. So, we try to choose a distribution for the inputs that most

    closely matches data we already have, or best represents our current state of knowledge.

    The data generated from the simulation can be represented as probability distributions (or

    histograms) or converted to error bars, reliability predictions, tolerance zones, and

    confidence intervals. (See Figure 2).


    Uncertainty Propagation

    Figure 2: Schematic showing the principle of stochastic uncertainty propagation. (The basic principle behind Monte Carlo simulation.)

    If you have made it this far, congratulations! Now for the fun part! The steps in Monte

    Carlo simulation corresponding to the uncertainty propagation shown in Figure 2 are

    fairly simple, and can be easily implemented in Excel for simple models. All we need to

    do is follow the five simple steps listed below:

    Step 1: Create a parametric model, y = f(x1, x2, ..., xq).

    Step 2: Generate a set of random inputs, xi1, xi2, ..., xiq.

    Step 3: Evaluate the model and store the results as yi.

    Step 4: Repeat steps 2 and 3 for i = 1 to n.

    Step 5: Analyze the results using histograms, summary statistics, confidence intervals,

    etc.
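    Put together, the five steps might look like the sketch below, here with a toy model y = x1*x2 and normally distributed inputs (the model and the distribution parameters are assumptions made for illustration):

    % Step 1: create a parametric model y = f(x1, x2).
    f = @(x1, x2) x1 .* x2;
    n = 10000;                       % number of Monte Carlo iterations
    % Step 2: generate sets of random inputs (normal distributions assumed).
    x1 = 10 + 2*randn(n,1);          % mean 10, standard deviation 2
    x2 =  5 + 1*randn(n,1);          % mean 5,  standard deviation 1
    % Steps 3 and 4: evaluate the model for every input set and store yi
    % (vectorized here, which replaces the explicit repeat loop).
    y = f(x1, x2);
    % Step 5: analyze with a histogram, summary statistics, and a crude 95%
    % interval read off the sorted samples.
    histogram(y)
    ys = sort(y);
    fprintf('mean = %.2f, std = %.2f, 95%% interval = [%.2f, %.2f]\n', ...
        mean(y), std(y), ys(round(0.025*n)), ys(round(0.975*n)))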


    Some Advantages of Simulation

    Often the only type of model possible for complex systems: analytical models are frequently infeasible

    Process of building the simulation can clarify understanding of the real system; sometimes more useful than the actual application of the final simulation

    Allows for sensitivity analysis and optimization of the real system without the need to operate the real system

    Can maintain better control over experimental conditions than the real system

    Time compression/expansion: can evaluate the system on a slower or faster time scale than the real system

    Some Disadvantages of Simulation

    May be very expensive and time consuming to build a simulation

    Easy to misuse simulation by stretching it beyond the limits of credibility:

        Problem especially apparent when using commercial simulation packages, due to ease of use and lack of familiarity with underlying assumptions and restrictions

        Slick graphics, animation, tables, etc. may tempt the user to assign unwarranted credibility to the output

    Monte Carlo simulation usually requires several (perhaps many) runs at given input values (contrast: an analytical solution provides exact values)

    WHEN TO USE THE SEMIANALYTIC TECHNIQUE

    The semianalytic technique works well for certain types of communication

    systems, but not for others. It is applicable if a system

    has all of these characteristics:

    Any effects of multipath fading, quantization, and amplifier nonlinearities must

    precede the effects of noise in the actual channel being modeled.

    The receiver is perfectly synchronized with the carrier, and timing jitter is

    negligible. Because phase noise and timing jitter are slow processes, they reduce
    the applicability of the semianalytic technique to a communication system.

    The noiseless simulation has no errors in the received signal constellation.

    Distortions from sources other than noise should be mild enough to keep each

    signal point in its correct decision region. If this is not the case, then the

    calculated BER will be too low. For instance, if the modeled system has a phase

    rotation that places the received signal points outside their proper decision

    regions, then the semianalytic technique is not suitable to predict system

    performance.


    PROCEDURE FOR THE SEMIANALYTIC TECHNIQUE

    The procedure below describes how you would typically implement the semianalytic technique using the semianalytic function:

    Generate a message signal containing at least M^L symbols, where M is the alphabet size

    of the modulation and L is the length of the impulse response of the channel, in symbols.

    A common approach is to start with an augmented binary pseudonoise (PN) sequence of

    total length (log2 M)·M^L. An augmented PN sequence is a PN sequence with an extra zero

    appended, which makes the distribution of ones and zeros equal.

    Modulate a carrier with the message signal using baseband modulation. Supported

    modulation types are listed on the reference page for semianalytic. Shape the resultant

    signal with rectangular pulse shaping, using the oversampling factor that you will later

    use to filter the modulated signal. Store the result of this step as txsig for later use.

    Filter the modulated signal with a transmit filter. This filter is often a square-root raised

    cosine filter, but you can also use a Butterworth, Bessel, Chebyshev type 1 or 2, elliptic,

    or more general FIR or IIR filter. If you use a square-root raised cosine filter, use it on the

    non-oversampled modulated signal and specify the oversampling factor in the filtering

    function. If you use another filter type, you can apply it to the rectangular pulse shaped

    signal.


    Run the filtered signal through a noiseless channel. This channel can include multipath

    fading effects, phase shifts, amplifier nonlinearities, quantization, and additional filtering,

    but it must not include noise. Store the result of this step as rxsig for later use.

    Invoke the semianalytic function using the txsig and rxsig data from earlier steps.

    Specify a receive filter as a pair of input arguments, unless you want to use the function's

    default filter. The function filters rxsig and then determines the error probability of each

    received signal point by analytically applying the Gaussian noise distribution to each

    point. The function averages the error probabilities over the entire received signal to

    determine the overall error probability. If the error probability calculated in this way is a

    symbol error probability, then the function converts it to a bit error rate, typically by

    assuming Gray coding. The function returns the bit error rate (or, in the case of DQPSK

    modulation, an upper bound on the bit error rate).
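    The whole procedure can be outlined in MATLAB roughly as below. This is only a sketch under several assumptions: QPSK with M = 4 and a memoryless channel (L = 1), PSK phasors and rectangular pulse shaping built by hand rather than with toolbox modulator functions, a mild fixed phase rotation standing in for the noiseless channel, and a semianalytic call whose argument order and modulation-type string should be checked against the toolbox reference page:

    M = 4;  L = 1;  Nsamp = 8;            % QPSK, memoryless channel, oversampling
    msg = randi([0 M-1], 4*M^L, 1);       % message: at least M^L symbols
    modSig = exp(1j*(2*pi*msg/M + pi/M)); % baseband PSK phasors, built by hand
    txsig = repelem(modSig, Nsamp);       % rectangular pulse shaping
    rxsig = txsig * exp(1j*pi/90);        % "noiseless channel": small phase
                                          % rotation only, no noise added
    EbNo = 0:2:10;                        % Eb/No points in dB
    % Toolbox call (verify the signature and modtype string for your release):
    ber = semianalytic(txsig, rxsig, 'psk/nondiff', M, Nsamp, EbNo);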

    PHASE-SHIFT KEYING

    Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing,

    or modulating, the phase of a reference signal (the carrier wave).

    Any digital modulation scheme uses a finite number of distinct signals to represent digital

    data. PSK uses a finite number of phases, each assigned a unique pattern of binary bits.

    Usually, each phase encodes an equal number of bits. Each pattern of bits forms the

    symbol that is represented by the particular phase. The demodulator, which is designed

    specifically for the symbol-set used by the modulator, determines the phase of the

    received signal and maps it back to the symbol it represents, thus recovering the original

    data. This requires the receiver to be able to compare the phase of the received signal to a

    reference signal; such a system is termed coherent (and referred to as CPSK).

    Alternatively, instead of using the bit patterns to set the phase of the wave, they can

    be used to change it by a specified amount. The demodulator then determines the

    changes in the phase of the received signal rather than the phase itself. Since this scheme


    depends on the difference between successive phases, it is termed differential phase-

    shift keying (DPSK). DPSK can be significantly simpler to implement than ordinary

    PSK since there is no need for the demodulator to have a copy of the reference signal to

    determine the exact phase of the received signal (it is a non-coherent scheme). In

    exchange, it produces more erroneous demodulations. The exact requirements of the

    particular scenario under consideration determine which scheme is used.
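    A minimal from-scratch sketch of the differential idea for binary DPSK (no toolbox calls; all values assumed): the information rides on phase changes, so the demodulator compares successive symbols and never needs an absolute carrier reference.

    % Binary DPSK: encode bits as phase changes, decode by comparing
    % successive received symbols.
    bits = randi([0 1], 10, 1);
    phi  = cumsum([0; pi*bits]);            % bit 1 -> 180-degree phase change
    tx   = exp(1j*phi);                     % transmitted phasors
    rx   = tx * exp(1j*0.7);                % unknown constant phase offset
    d    = rx(2:end) .* conj(rx(1:end-1));  % phase difference between symbols
    bitsHat = double(real(d) < 0);          % 180-degree change -> negative real part
    isequal(bits, bitsHat)                  % bits recovered despite the offset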

    There are three major classes of digital modulation techniques used for transmission of

    digitally represented data:

    Amplitude-shift keying (ASK)

    Frequency-shift keying (FSK)

    Phase-shift keying (PSK)

    All convey data by changing some aspect of a base signal, the carrier wave (usually a

    sinusoid), in response to a data signal. In the case of PSK, the phase is changed to

    represent the data signal. There are two fundamental ways of utilizing the phase of a

    signal in this way:

    By viewing the phase itself as conveying the information, in which case the

    demodulator must have a reference signal to compare the received signal's phase

    against; or

    By viewing the change in the phase as conveying information (differential

    schemes), some of which do not need a reference carrier (to a certain extent).

    A convenient way to represent PSK schemes is on a constellation diagram. This shows

    the points in the Argand plane where, in this context, the real and imaginary axes are

    termed the in-phase and quadrature axes respectively due to their 90° separation. Such a
    representation on perpendicular axes lends itself to straightforward implementation. The

    amplitude of each point along the in-phase axis is used to modulate a cosine (or sine)

    wave and the amplitude along the quadrature axis to modulate a sine (or cosine) wave.
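    A short sketch of that last point with assumed carrier and timing values: the in-phase coordinate of a constellation point scales a cosine carrier and the quadrature coordinate scales a sine carrier (the relative sign is a convention), and their sum is the transmitted waveform.

    % Build one symbol's passband waveform from its constellation point.
    s  = exp(1j*3*pi/4);          % example point at 135 degrees
    fc = 1000;                    % carrier frequency in Hz (assumed)
    t  = 0:1e-5:1e-3;             % one symbol period (assumed)
    x  = real(s)*cos(2*pi*fc*t) - imag(s)*sin(2*pi*fc*t);
    plot(t, x), xlabel('t (s)'), ylabel('x(t)')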


    In PSK, the constellation points chosen are usually positioned with uniform angular

    spacing around a circle. This gives maximum phase-separation between adjacent points

    and thus the best immunity to corruption. They are positioned on a circle so that they can

    all be transmitted with the same energy. In this way, the moduli of the complex numbers

    they represent will be the same and thus so will the amplitudes needed for the cosine and

    sine waves. Two common examples are "binary phase-shift keying" (BPSK) which uses

    two phases, and "quadrature phase-shift keying" (QPSK) which uses four phases,

    although any number of phases may be used. Since the data to be conveyed are usually

    binary, the PSK scheme is usually designed with the number of constellation points being

    a power of 2.
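    Such a constellation can be generated directly (a sketch; adding a fixed offset to the angles merely rotates the whole constellation):

    % Ideal M-PSK constellation: M unit-modulus points, uniformly spaced,
    % so every symbol is transmitted with the same energy.
    M = 8;
    pts = exp(1j*2*pi*(0:M-1)/M);
    plot(real(pts), imag(pts), 'o'), axis equal
    xlabel('In-phase'), ylabel('Quadrature')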

    Definitions

    For determining error-rates mathematically, some definitions will be needed:

    Eb = Energy-per-bit

    Es = Energy-per-symbol = k·Eb, with k bits per symbol

    Tb = Bit duration

    Ts = Symbol duration

    N0/2 = Noise power spectral density (W/Hz)

    Pb = Probability of bit-error

    Ps = Probability of symbol-error

    Q(x) gives the probability that a single sample taken from a random process with

    zero-mean and unit-variance Gaussian probability density function will be greater than or

    equal to x. It is a scaled form of the complementary Gaussian error function:

    Q(x) = (1/2)·erfc(x/√2)
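    In MATLAB, Q(x) can be evaluated through erfc; as a familiar check on the definitions above, the bit-error probability of coherent BPSK in AWGN is Pb = Q(sqrt(2·Eb/N0)):

    % Q-function as a scaled complementary error function.
    Q = @(x) 0.5 * erfc(x / sqrt(2));
    EbN0dB = 8;                    % example Eb/N0 in dB (assumed value)
    EbN0   = 10^(EbN0dB/10);
    Pb     = Q(sqrt(2*EbN0));      % coherent BPSK bit-error probability
    fprintf('BPSK Pb at %g dB: %.3g\n', EbN0dB, Pb)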

    CONSTELLATION DIAGRAM


    A constellation diagram is a representation of a signal modulated by a digital

    modulation scheme such as quadrature amplitude modulation or phase-shift

    keying. It displays the signal as a two-dimensional scatter diagram in the complex

    plane at symbol sampling instants. In a more abstract sense, it represents the

    possible symbols that may be selected by a given modulation scheme as points in

    the complex plane. Measured constellation diagrams can be used to recognize the

    type of interference and distortion in a signal.

    By representing a transmitted symbol as a complex number and modulating a

    cosine and sine carrier signal with the real and imaginary parts (respectively), the

    symbol can be sent with two carriers on the same frequency. They are often

    referred to as quadrature carriers. A coherent detector is able to independently

    demodulate these carriers. This principle of using two independently modulated

    carriers is the foundation of quadrature modulation. In pure phase modulation, the

    phase of the modulating symbol is the phase of the carrier itself.

    As the symbols are represented as complex numbers, they can be visualized as

    points on the complex plane. The real and imaginary axes are often called the in-

    phase, or I-axis, and the quadrature, or Q-axis. Plotting several symbols in a scatter

    diagram produces the constellation diagram. The points on a constellation

    diagram are called constellation points. They are a set of modulation symbols which comprise the modulation alphabet.

    A diagram of the ideal positions in a modulation scheme (the signal-space diagram)

    can also be called a constellation diagram. In this sense the constellation is not

    a scatter diagram but a representation of the scheme itself. The example shown

    here is for 8-PSK, which has also been given a Gray coded bit assignment.
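    A quick sketch of a measured constellation (noise level and symbol count assumed): take ideal 8-PSK points, add complex Gaussian noise, and scatter-plot the samples; the spread of each cluster reflects the distortion in the signal.

    % Scatter diagram of noisy 8-PSK symbols at the symbol sampling instants.
    M = 8;  n = 2000;
    sym   = exp(1j*2*pi*randi([0 M-1], n, 1)/M);   % random ideal symbols
    noise = 0.08*(randn(n,1) + 1j*randn(n,1));     % complex AWGN (assumed level)
    plot(sym + noise, '.'), axis equal             % complex input: real vs. imag
    xlabel('In-phase'), ylabel('Quadrature')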

    PSK CONSTELLATION WITH ISI AND AWGN

    The M signal waveforms for ideal M-ary PSK are represented as

    sm(t) = g(t)·cos(ωc·t + θm),  m = 0, 1, ..., M − 1

    where g(t) is a pulse waveform used to shape the spectrum of the transmitted signal and θm is the information-bearing phase angle, which takes M possible values:

    θm = (2m + 1)π/M,  m = 0, 1, ..., M − 1

    If an ideal PSK signal is optimally demodulated, then, using complex phasor

    notation, each of the complex decision variables takes one of the following M

    values:

    Sm = √E · e^(jθm),  m = 0, 1, ..., M − 1

    where E is the energy of the spectrum-shaping pulse, given in terms of the bit

    energy Eb by E = log2(M)·Eb. The ideal symbol locations Sm and decision

    regions Rm for 8-PSK can be drawn in a constellation diagram.

    When distortions due to channel effects or modem imperfections are present,

    the received decision variables will differ from the M ideal points, and their

    locations will be data-dependent due to ISI. In this context, ISI will refer to the

    effects of both linear and non-linear time-invariant distortions with memory.

    Assuming equiprobable symbols, then in order to completely characterize the ISI

    of a channel with L symbol periods of memory, it is sufficient to consider all M^L

    possible sequences of L symbols. A maximal-length pseudorandom M^L symbol sequence

    will satisfy this property. For M = 2, linear feedback shift registers can be used to

    generate maximal-length pseudorandom bit sequences.

    For M>2, efficient methods for generating maximal length pseudorandom symbol

    sequences have also been proposed. With the addition of cyclic prefixes and

    postfixes of L symbols each, a simulation using one cycle of an M^L-length

    pseudorandom symbol sequence is sufficient to emulate equiprobable data

    symbols.

    Therefore, by performing a simulation using M^L + 2L symbols from a maximal-
    length M^L symbol sequence, and discarding the first and last L demodulated and

    detected decision variable points, the resulting M^L decision variable points will

    completely characterize the effect of the system ISI on the signal.
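    The prefix/postfix bookkeeping can be sketched generically as below, assuming BPSK for simplicity; the randi sequence is only a placeholder for a true maximal-length M^L symbol sequence, and the channel taps are assumed values:

    M = 2;  L = 3;
    seq = randi([0 M-1], M^L, 1);        % placeholder, not a true ML sequence
    x   = exp(1j*pi*seq);                % BPSK phasors
    h   = [1 0.4 0.2 0.1].';             % channel with L symbols of memory
    ext = [x(end-L+1:end); x; x(1:L)];   % cyclic prefix and postfix, L each
    out = filter(h, 1, ext);             % noiseless ISI channel
    s   = out(L+1:end-L);                % discard first and last L points;
                                         % M^L decision variables remain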

    This set of decision variables can be defined in terms of their respective

    magnitudes and phases, or in-phase and quadrature components:


    sk = rk·e^(jφk) = ik + j·qk,  k = 0, 1, ..., M^L − 1

    When AWGN is present at the receiver input, the decision variables are

    yk = sk + nk,  k = 0, 1, ..., M^L − 1

    For a receiver having an arbitrary discrete-time detection filter with impulse response

    h(n), the noise component nk at the filter output is a sequence of complex Gaussian-

    distributed random variables.


    OUR MATLAB SIMULATION

    [Figure: constellation scatter plot from our MATLAB simulation; in-phase and quadrature axes both span -2 to 2.]