

P2 Digital Logic- notes by L. Tarassenko (Lecturer D.C.O'Brien)

Extended Syllabus

Suggested Reading

Lecture A – Logical functions and logic gates

Introduction and comments

Binary logic

Implementation of a logic function using an electrical circuit

Digital electronic circuits

MOSFET transistors as switches

Overview of the basic principles of operation of the MOSFET

Basic logic gates and truth tables

Basic laws of Boolean Algebra

Universal gates

CMOS implementation of digital logic

Transmission Gate

Real integrated circuit gates


Digital Logic - L. Tarassenko

Extended Syllabus

Basic gates, truth tables, combinational functions (AND, OR, NOT, EX-

OR). The MOSFET: basic principles of operation; the MOSFET as a

switch; CMOS inverter, NOR and NAND gates. Karnaugh maps;

algebraic laws (such as distribution and association). Binary codes.

Binary arithmetic: adders/subtractors, fixed- and floating-point

representations. Multiplexers, ROMs and PLAs. Sequential logic I:

D-type flip-flops, registers, asynchronous counters. Sequential logic II:

synchronous counters, Karnaugh transition maps. Introduction to the

concept of state machines.

Suggested Reading

The following books have been used in the preparation of these lecture

notes. Hill and Peterson has been around for a number of years, but is

becoming outdated in style. The book by Green gives a more applied

feel to the subject with some nice examples. The Mano & Kime text is

now being used for the follow-up lectures in the core course. Please

remember that these notes can only be a summary of what is available

in textbooks.

1. Digital Logic and Microprocessors, Hill and Peterson, Wiley (ISBN

047182979X)

2. Applied Digital Electronics, D.C. Green, Longman (ISBN

0582356326)

3. The Essence of Digital Design, B. Wilkinson (ISBN 0135701104)

4. Logic and Computer Design Fundamentals, M.M. Mano and C.R.

Kime, Pearson (ISBN 0131911651)


Lecture A – Logical functions and logic gates

Introduction and comments

Rules are necessary. One of the aims of this course is to describe the

rules for basic logic design, such that you will be able to appreciate the

essential features of complex digital systems. Nowadays the design of

microprocessors, memory chips, specific integrated circuits, digital signal

processing circuits, etc. has reached such a level of complexity that it

requires intensive use of computers.

In fact, with computer optimisation algorithms being applied to large

circuit systems we are rapidly reaching the point when computers will

design the next generation of computers. Already circuit diagrams are

old-fashioned and nearly all circuit designs are produced using computer

languages such as VHDL. These languages are often referred to as

“silicon compilers”. Software programmers define the process and

algorithm to be implemented in a code which is then, after compilation

and error checking, sent to a “silicon vendor” to turn it into a silicon

integrated circuit.

Nonetheless, it is crucial to understand the basics, because ultimately

every digital computer is a (very large) collection of simple logic gates

which manipulates bits of information under the control of other logic

gates.

The digital computer is a general-purpose machine. (In the 1960s,

analogue computers enjoyed some success, but in the end digital


computing triumphed over analogue computing.) Digital computers use

the binary number system to represent information. A binary digit (0 or 1)

is called a bit. Groups of bits specify to the computer the instructions to

be executed and the data to be processed.

Digital circuits are the hardware components which manipulate the bits.

The circuits are implemented using transistors and interconnections in

complex semiconductor devices called integrated circuits. Each basic

circuit is referred to as a logic gate.

The course is divided into two sections. Part I deals exclusively with

combinational circuits, i.e. collections of logic gates producing an output

in combination, whereas Part II extends this and introduces sequential

circuits operating in discrete time chunks. This leads to the concept of a

state machine (the basic building block of computers).

Binary logic

Binary logic deals with binary variables, and with the operations of

mathematical logic applied to these variables. Boolean algebra (named

after the English mathematician George Boole) describes how to perform

logical operations in a binary logic system. Boolean algebra has only

three operators: NOT, AND and OR.

Before we introduce the implementation of these Boolean operators

using digital electronic circuits, let us consider the implementation of a

simple logic function using an electrical circuit to switch a lamp on or off

depending on the position of two switches.


Implementation of a logic function using an electrical circuit

In the electrical circuit below it can be seen that no current is reaching

the relay as switch B is open. The relay is shown in its off position and

the lamp is lit. If switch B is closed then the relay arm is pushed

out, and the lamp circuit broken. The light switches off.

We can build up a simple logic or Boolean expression to represent cases

when the lamp is either ON or OFF. Below is a simple Truth Table

description that illustrates all cases. Truth tables are a very helpful way

of considering all possibilities arising in a logic system (i.e. a description

of the output(s) of the system for all possible inputs).

Switch A   Switch B   Lamp state
Open       Open       ON
Open       Closed     ON
Closed     Open       ON
Closed     Closed     OFF
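The switch logic above can be checked in a few lines of code. The Python sketch below is for illustration only (it is not part of the notes); it encodes a closed switch as 1 and an open switch as 0:

```python
from itertools import product

def lamp(a: int, b: int) -> int:
    # 1 = switch closed, 0 = switch open. The lamp is off only when
    # both switches are closed (relay energised, lamp circuit broken).
    return 1 - (a & b)

for a, b in product([0, 1], repeat=2):
    print(a, b, "->", "ON" if lamp(a, b) else "OFF")
```

Running the loop reproduces the truth table row by row.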


Digital electronic circuits

In these transistor circuits, logic states are represented by voltage values

at the inputs and outputs of the circuits.

In the early days (1960’s) of transistor-transistor logic (TTL), the voltage

levels given in the table below were assigned to the logic 0 and logic 1

states. Note, however, that integrated circuit design has driven these

voltages smaller in order to reduce the effects of power consumption as

the transistor density increases. It is quite common these days to have

logic 1 states represented by voltages that are lower than 2 volts.

Voltage level   Logic value
0 V             FALSE, logic 0 state
5 V             TRUE, logic 1 state

MOSFET transistors as switches

[Wilkinson Section 2.6]

The transistors used nowadays for logic circuits are based on CMOS

designs (Complementary Metal-Oxide-Semiconductor Field Effect

Transistors). To represent a logic level, two transistors are used in a


configuration that is very similar to the switching circuit shown earlier –

see the circuit below as an example.

There is a switch at the top of the circuit and one at the bottom. The

output is taken from the middle point. Normally one switch is open

whilst the other is closed. There is a special case, which we will return to

later, when both switches are open and the output is effectively in a high-

impedance state.

To get a high (logic 1) output the top switch needs to be closed, whereas

a logic 0 output is obtained when the bottom switch is closed. The case

when both switches are closed is not considered, as this would short-

circuit the supply rails. The actual transistor elements are given in the

diagram on the next page. The term complementary in CMOS refers to

the fact that the top transistor has a conducting channel that relies on

hole transport (i.e. p-type conduction), whereas the bottom transistor is

normally an n-type MOSFET (relies on electrons).

The two working together like this offer a number of advantages, the

main feature being that power is only consumed by the transistors during

a switch transition. If the transistors maintain a stable output (logic 0 or

logic 1) then no power is being consumed. This is an important point, as


thermal management is a considerable problem in the design of

microprocessors such as the Pentium.

For an n-type MOSFET the channel is conducting when its Gate is at a

logic 1 level. The opposite is true for the p-type device.

Overview of the basic principles of operation of the MOSFET

How do n-channel and p-channel MOSFETs work? How does the

voltage applied to the gate electrode control the flow of current between

the drain and source contacts (i.e. the drain-to-source resistance)?


The figure below gives a simple interpretation of what is happening in

both the n and p channel devices as the gate voltage is switched

between the logic 0 and logic 1 states (i.e. 0 to 5 volts). These lecture

notes will only consider a simple explanation for the behaviour of

MOSFET devices, based on electro-static arguments. The situation in

practice is more complicated and beyond the syllabus of this course.

In both devices the gate is a conducting track, such as a metal,

evaporated on top of an insulating layer. The gate dimension for a

Pentium is typically less than 180 nm. The insulator is often an oxide

of silicon and is no more than 3 nm thick! On either side of the gate

are the drain and source regions. These are formed by diffusing

impurities into the front surface of the silicon. Electrical contacts are

made to the gate, drain and source in order to run the transistor. Note

that the n-channel MOSFET is made from p-type silicon, with n-diffused


drain and source region (n-type silicon, with p-diffused drain and source

region for the p-channel MOSFET).

Let us consider the drain-to-source resistance for the n-MOSFET

transistor first. In the cross-sectional view notice that the drain and

source diffusions form a diode structure (p-n junction) with the bulk of the

silicon region. With 0 volts applied to the gate (top left diagram) current

tries to flow from the drain to the source, but cannot because the diode

structure is reverse-biased and offers a high resistance. The MOSFET

is described as being in cut-off.

When 5 volts is applied to the gate, making it positively charged, the

available electrons in the vicinity of the gate are attracted by an

electrostatic force to the space immediately beneath the gate, creating a

thin “channel” of electrons. These electrons form a continuous region of

“n-type” semiconductor that allows current to flow such that the

resistance between the drain and source is negligible. The space

immediately beneath the gate is inverted from p-type to n-type, and the

current from the drain to the source is regarded as being saturated. The

value of gate voltage required to just achieve strong inversion is called

the threshold voltage, VT, and is typically about 0.2 volts. It is an

important device parameter.

For the p-channel MOSFET the doping of the bulk of the semiconductor

is such that there exists a thin layer of holes beneath the gate when no

gate voltage is applied. Under these conditions the drain-source

resistance is negligible, and the MOSFET is described as saturated.

However, when applying 5 volts to the gate, making it positively charged,

the holes are repelled by electrostatic forces and distribute into the bulk

of the semiconductor. Now the influence of the reverse-biased diodes

between the drain and the bulk of the semiconductor prevents current


flow. The MOSFET is now in its high resistance state, or cut-off. The

operation of the p-MOSFET is complementary to that of the n-MOSFET.

MOSFET current-voltage characteristics

There is a simple square-law approximation for the current between

drain and source. As we are only concerned with digital logic in this

lecture course, we only need consider the MOSFET in one of the two

states: cut-off (when no current flows) and saturation (when the drain-

source current is at its largest value).

In cut-off

The gate voltage, VG, is less than the threshold voltage, VT, and the

resistance between drain and source is typically about 5×10⁹ ohms (the

intrinsic resistance in the vicinity of the gate region).

In saturation

The saturated current flowing between drain and source, IDS, is given

approximately by the following square-law equation:

IDS = (β/2).(VG − VT)²

where β is a parameter relating to the physical dimensions of the

MOSFET and the properties of the silicon:

β = εr.ε0.μn,p.W / (tox.L)

Here εr is the relative dielectric constant of the silicon, ε0 is the dielectric

permittivity of free space, μn,p is the mobility of the relevant charge

carrier (n or p channel), tox is the oxide thickness, W is the width of the

gate, and L is the gate length. For typical devices, the drain-to-source

resistance, when the gate voltage for an n-channel device is at the logic

1 state, is 0.6 ohm – which is effectively a short circuit compared with the

cut-off value determined above.
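As a rough numerical illustration, the square-law expression can be evaluated for one set of assumed device values. The numbers in the Python sketch below are illustrative only and are not taken from the notes:

```python
# Illustrative values only (not from the notes): a 3 nm oxide,
# a channel mobility of 0.06 m^2/(V.s), and W/L = 10.
EPS_0 = 8.854e-12   # permittivity of free space, F/m
EPS_R = 3.9         # relative dielectric constant (assumed)
MU_N = 0.06         # channel mobility, m^2/(V.s) (assumed)
T_OX = 3e-9         # oxide thickness, m
W_OVER_L = 10.0     # gate width-to-length ratio (assumed)
V_T = 0.2           # threshold voltage, V

def beta() -> float:
    # beta = eps_r . eps_0 . mu . W / (t_ox . L)
    return EPS_R * EPS_0 * MU_N * W_OVER_L / T_OX

def i_ds(v_g: float) -> float:
    # Square-law saturation current: I_DS = (beta/2).(V_G - V_T)^2,
    # zero in cut-off (no channel below threshold).
    if v_g <= V_T:
        return 0.0
    return 0.5 * beta() * (v_g - V_T) ** 2

print(f"I_DS at V_G = 5 V: {i_ds(5.0) * 1e3:.1f} mA")
```

The point of the exercise is the shape of the law, not the exact figure: the current is zero below VT and grows as the square of the gate overdrive above it.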

Basic logic gates and truth tables

If we return to the lamp circuit and its truth table, we can see that the

lamp is lit when switch A or switch B is open. We note that logical

expressions like “A AND B”, “A OR B”, and “NOT A” are denoted

algebraically by A.B (like multiplication), A + B (like addition), and A̅

(A with an overbar), respectively. The mathematical representation is therefore:

Lamp = A̅ + B̅ = NOT(A.B)

The same information can be conveyed using a Truth Table as follows,

assuming that an open switch corresponds to logic 0, a closed switch to

logic 1, and that the lamp being lit is a logic 1:

A   B   Lamp, or output C
0   0   1
0   1   1
1   0   1
1   1   0

Note that the column entries for inputs A and B are an exhaustive list of

all possibilities. This is an important convention, and helps to avoid

missing an input condition by accident.


The diagrams below give the symbols and corresponding truth tables for

the common logical functions.


Basic laws of Boolean Algebra

Consider the logic function C = A.B̅ + A̅.B, which has the following circuit

diagram and truth table:


Note that the truth table is the same as that for the Exclusive OR gate,

even though the circuit is different. Circuits that have the same truth

table are called equivalent. In practice, circuits that are equivalent in the

logical sense may have important differences (e.g. timing), but these will

not worry us in this course.

Manipulating Boolean expressions is just the same as mathematical

algebra. There are rules, and the aim is often to simplify a complex

expression. The following basic laws are all easily demonstrated using

truth tables, but need to be learnt.

NOT(X̅) ≡ X

X + Y ≡ Y + X and X.Y ≡ Y.X (commutative property)

(X + Y) + Z ≡ X + (Y + Z) and (X.Y).Z ≡ X.(Y.Z) (associative property)


X + 0 ≡ X, X + 1 ≡ 1

X.0 ≡ 0, X.1 ≡ X

X.(Y + Z) ≡ X.Y + X.Z (AND distributes over OR)

X + Y.Z ≡ (X + Y).(X + Z) (OR distributes over AND)

De Morgan’s Laws (Augustus De Morgan, 1806-1871, Professor of Mathematics, UCL)

NOT(X + Y) ≡ X̅.Y̅

NOT(X.Y) ≡ X̅ + Y̅
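All of these laws can be verified exhaustively, exactly as the truth-table argument suggests. The Python sketch below is illustrative (not part of the notes) and uses 0/1 for FALSE/TRUE:

```python
from itertools import product

# Exhaustively check each law over every 0/1 assignment of X, Y, Z.
for x, y, z in product([0, 1], repeat=3):
    assert x | y == y | x                       # commutative (OR)
    assert (x | y) | z == x | (y | z)           # associative (OR)
    assert x & (y | z) == (x & y) | (x & z)     # AND distributes over OR
    assert x | (y & z) == (x | y) & (x | z)     # OR distributes over AND
    assert 1 - (x | y) == (1 - x) & (1 - y)     # De Morgan: NOT(X+Y)
    assert 1 - (x & y) == (1 - x) | (1 - y)     # De Morgan: NOT(X.Y)
print("all laws verified")
```

Because there are only eight assignments of three binary variables, a complete check is trivial; this is the programmatic equivalent of writing out the truth tables.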

Universal gates

De Morgan’s laws express a relationship between the three basic types

of gate (i.e. AND, NOT, OR). It implies that only two types of gate are

necessary, as the third can be expressed in terms of the other two. The

theorem says:

A + B ≡ NOT(A̅.B̅)

In other words, the OR function is equivalent to the NAND of NOT(A) and

NOT(B). It turns out that all logic functions can be represented using

NAND gates, and as such the NAND is sometimes referred to as the

universal gate. The diagram below is an illustration of how the basic

gates are derived from NAND gates only. This means that complex

Boolean logic expressions can be implemented using NAND gates

alone, which offers considerable advantages in terms of electronic

design.
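The construction can be demonstrated directly. The Python sketch below (illustrative, not from the notes) builds NOT, AND and OR out of a single NAND primitive and checks each against its truth table:

```python
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

# NOT from one NAND, AND from two, OR from three (via De Morgan).
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return nand(nand(a, b), nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NOT, AND and OR all built from NAND alone")
```

Note the gate counts: one NAND for NOT, two for AND, three for OR, mirroring the diagram of derived gates.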


In a similar fashion, instead of using NAND gates, we could use NOR

gates as the universal gate and build up all logic functions in terms of

NOR gates only. This relationship between the NOR and NAND

functions is called DUALITY. A gate that performs a NAND operation in

positive logic representation performs the NOR operation in negative

representation and vice versa. Again, this can be proved by inspection

of the relevant truth tables.

The distinction between positive and negative logic can be confusing. All

it means is that for any design to be converted from positive to negative

logic all that is required is to take the logic 1 and 0 levels and convert

them to logic 0 and 1 values, respectively.
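The duality rule can also be checked mechanically. In the Python sketch below (illustrative, not from the notes), relabelling every signal from positive to negative logic turns a NAND gate's truth table into that of a NOR gate:

```python
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def nor(a: int, b: int) -> int:
    return 1 - (a | b)

def relabel(v: int) -> int:
    # Converting between positive and negative logic swaps 0 and 1.
    return 1 - v

for a in (0, 1):
    for b in (0, 1):
        # The same physical gate: relabel its inputs and its output,
        # and the resulting truth table is that of NOR.
        assert relabel(nand(relabel(a), relabel(b))) == nor(a, b)
print("a NAND gate in positive logic performs NOR in negative logic")
```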

The two gates below are equivalent, with the circles at the input to the

OR gates referring to the negative (or NOT) of the input logic.


CMOS implementation of digital logic

[Hill and Peterson Chapter 5]

So far we have only looked at schematic symbols for the gates we have

been considering, but of course these symbols represent electronic

devices that behave in a characteristic way due to resistance and

capacitance (and to a much lesser extent inductance). In this section

we will examine how gates are made up of the CMOS transistor

elements we have already discussed.

The next diagram shows circuit diagrams for CMOS NAND and NOR

gates. Note that the basic function of CMOS logic is a NOT gate, in the

sense that the very simplest circuit we could obtain would be an inverter

comprising a single input and two MOSFETs.

All other functions are obtained by placing more transistors either in

parallel or in series. The NAND and NOR gates have two input

connections (A and B in this case) and a single output. The inputs are

divided between the n and p type transistors. If the bottom gates are in

series, then typically the top gates are in parallel, and vice versa.

Consider the NAND gate:


i. suppose that the inputs (A and B) are both logic 1 states. Then the

bottom transistors are conducting whereas both the top p-type

transistors are open-circuit. The output (C) is therefore connected

to the 0 volts supply line (logic 0).

ii. suppose now that either or both of the inputs are logic 0. Then the

output is connected to the VDD supply rail and is represented as the

logic 1 state.

The truth table for (i) and (ii) corresponds to the NAND functionality.

It is left as an exercise for the reader to prove the NOR functionality for

the second gate using similar arguments.
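The pull-up/pull-down argument in (i) and (ii) can be written out as a small model. The Python sketch below (illustrative, not part of the notes) treats each transistor network as a switch condition:

```python
def cmos_nand(a: int, b: int) -> str:
    # Bottom network: two n-type transistors in series, each conducting
    # when its gate is 1, so the output is pulled to 0 V only if A AND B.
    pull_down = (a == 1) and (b == 1)
    # Top network: two p-type transistors in parallel, each conducting
    # when its gate is 0, so the output is pulled to VDD if A OR B is 0.
    pull_up = (a == 0) or (b == 0)
    # Exactly one network conducts at a time: no supply-rail short.
    assert pull_up != pull_down
    return "VDD (logic 1)" if pull_up else "0 V (logic 0)"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nand(a, b))
```

The built-in assertion captures the complementary property: for a correctly structured gate, the pull-up and pull-down networks are never both conducting.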


More complicated circuits can be built up by adding more transistors and

by cascading CMOS sections. The diagram below is a typical example,

with a single inverter CMOS element as the output stage.


Transmission Gate

There are of course many other implementations of CMOS gates.

Sometimes, the flow of data (i.e. a stream of logic values) needs to be

controlled with something similar to a standard open or shut switch. The

next figure shows the correct circuit for a simple CMOS switch.

Surprisingly we need two transistors in a configuration that is called a

CMOS Transmission Gate. It is exactly equivalent to a simple switch

with a single gate control, such that a logic 0 opens the gate and a logic 1

closes it. Note that the Gate Control signal is inverted for the p-channel

MOSFET.

The operation is as follows: when the control signal is a logic 0 the

bottom MOSFET is OFF (i.e. open circuit) and the top MOSFET receives

a logic 1 signal as the gate voltage. It is therefore OFF as well. Overall,

we see that there is no connection between the Data IN and Data OUT

sections of the circuit and the switch is considered to be open (as in the

illustration on the right). However, if the control signal is a logic 1, then

the bottom MOSFET is ON and conducting. Likewise the top MOSFET


is conducting due to the logic 0 state on its gate contact. The switch is

now closed and data passes from the input to the output.

Both gates are necessary to ensure that the logic levels produced at the

output preserve the correct voltage levels for the logic 0 and logic 1

states. It turns out that if only one gate was used then perfect switching

of the output voltage levels would not be possible.
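The switching behaviour described above can be modelled in a few lines. In this Python sketch (illustrative, not from the notes), an open switch is represented by a high-impedance None value:

```python
def transmission_gate(data_in: int, control: int):
    # Control = 0: the n-MOSFET gate sees 0 (OFF) and the p-MOSFET
    # gate sees the inverted control, 1 (also OFF): the switch is open.
    n_on = (control == 1)
    p_on = ((1 - control) == 0)   # the p-device conducts on a gate of 0
    if n_on or p_on:
        return data_in            # switch closed: data passes through
    return None                   # switch open: high-impedance output

assert transmission_gate(1, 1) == 1
assert transmission_gate(0, 1) == 0
assert transmission_gate(1, 0) is None
```

The model deliberately ignores the voltage-level argument for using both devices; it captures only the open/closed behaviour of the composite switch.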

Real integrated circuit gates

[Hill and Peterson Chapter 5, Wilkinson Section 2.6]

In real (as opposed to ideal) circuits, there are delays in signal switching

due to capacitances. There are capacitances associated with cables,

tracks on printed circuit boards, metallisation on the surface of an

integrated circuit and due to the charge storage effects associated with

switching the gate contact on a MOSFET.

The next figure tries to illustrate this point. The A input is switched from

logic 1 to logic 0 and back again, whilst the B input remains at the logic 1

state. The influence on the output C is shown in the timing diagram

below the circuit.

The inverter has some delay due to various sources of capacitance (but

in the best case equal to the CMOS gate capacitance of the order of

picofarads) and resistance arising from the conducting tracks that

connect the circuit together. This is shown in the timing diagram by the

exponential rise in the inverter output voltage, labelled as A̅.


The exponential rise (and fall) in A̅ represents the capacitive charging

(and discharging) events that are seen in simple R-C circuits. Normally

for timing diagrams this is idealised as a simple slope in the trace. The

gradient of the slope is called the slew rate (in volts per second) and a

numerical figure for this parameter is often quoted on data sheets.

Also shown on the timing diagram is the characteristic delay time for the

inverter, τinv, which is typically a few nanoseconds. The output C is

shown switching in response to the changing logic state at its inputs.

The timing diagram also shows the total propagation delay, τtotal.
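To get a feel for the numbers, the exponential charging described above gives a 10-90% rise time of R.C.ln(9). The component values in the Python sketch below are assumed for illustration and are not taken from the notes:

```python
import math

# Assumed illustrative values: 1 kilohm of track resistance driving
# 5 pF of combined gate and wiring capacitance.
R = 1e3      # ohms
C = 5e-12    # farads

# For an exponential charge V(t) = VDD.(1 - exp(-t/RC)), the time to
# go from 10% to 90% of the supply voltage is t = RC.ln(9).
rise_time = R * C * math.log(9.0)
print(f"10-90% rise time: {rise_time * 1e9:.1f} ns")
```

The ln(9) factor comes from solving 1 - exp(-t/RC) = 0.9 and subtracting the 10% crossing, so the result scales directly with the RC product.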


Real gates are also influenced by the flow of current. If too many gates

are connected to the output of a single NAND gate (for example), then

the voltage levels might not be able to represent the logic 1 and 0 states

accurately. This problem is characterised by a parameter called the fan

out, an integer number giving the number of gates that can be connected

to a single output before the logic levels become uncertain. The gate fan

out must therefore not be exceeded.

When designing a circuit using basic gates laid out on a printed circuit

board it is necessary to examine carefully the data sheets corresponding

to the type of CMOS chips that are available. For interest, the figure

below shows 3 types of simple logic integrated circuit chips, in a 14-pin

DIL package (DIL stands for dual in-line). Note that two pins are used

to supply power to the chips. The other pins make connections to

different logic gates, as shown. This sort of information (together with

fan out, slew rate, current consumption, etc) is necessary in the design

of a printed circuit board.


Lecture B – Low level logic design

Simplification of Boolean expressions

Standard forms of logic expressions

Karnaugh Map reduction method

Eliminating functions in K-maps

Grouping rules

K-maps with more variables

Simplification using 0’s on Karnaugh Map

Identifying prime implicants

Incomplete variables, and don’t cares

A circuit design example


P2 Lecture B – Low level logic design

Simplification of Boolean expressions

[Hill and Peterson Chapter 4, Wilkinson Chapter 3]

Any realisation of a complex circuit design will be influenced by cost,

either design costs (in terms of design hours) or fabrication costs (in

terms of the number of chips or the amount of silicon space taken up

within a single chip). These are important issues because all designs

have to be commercially competitive. Also, customers have to have

confidence in the designs. Circuits therefore need to be proven to work,

and consequently have to be testable. There is a whole research area

devoted to testing complex integrated circuit designs and there are

specific rules and theorems that can be applied. Although design for

testability is well beyond the limits of this course, the same motivation

underpins the methods for reducing a complex design to its simplest

form, namely the desire to prove that a circuit will work as expected.

When faced with a Boolean expression there are several techniques that

can be employed for simplification, such as using the simple identities

from the first lecture to rewrite the equations. Below are a few more

identities, which it is advisable to learn.

Redundancy Theorem

X + X.Y = X.(1 + Y) ≡ X

Race Hazard Theorem

X.Y + Y.Z + X̅.Z ≡ X.Y + X̅.Z


Special Case of Race Hazard Theorem

X.Y + X̅.Y = (X + X̅).Y ≡ Y

(You will come across the topic of hazards later in the course.)

Consider the following example of Boolean logic simplification by

algebra.

h(A,B,C,D) = A.C̅ + A̅.B.C + A.B.C + A̅.B.C̅.D

⇒ A.C̅ + (A̅ + A).B.C + A̅.B.C̅.D

⇒ A.C̅ + B.C + A̅.B.C̅.D, using the special case of the Race Hazard theorem, X.Y + X̅.Y ≡ Y

⇒ A.C̅ + B.C.(1 + A̅.D) + A̅.B.C̅.D, using X.(1 + Y) ≡ X in reverse

⇒ A.C̅ + B.C + A̅.B.C.D + A̅.B.C̅.D

⇒ A.C̅ + B.C + A̅.B.D.(C + C̅)

….. and finally:

= A.C̅ + B.C + A̅.B.D, representing the Boolean function in its

simplest form.
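Any such simplification can be checked exhaustively against the original expression. The Python sketch below is illustrative (not part of the notes) and assumes the particular placement of negations used in the worked expression here:

```python
from itertools import product

def h(a: bool, b: bool, c: bool, d: bool) -> bool:
    # Four-term expression, with negations as assumed in this sketch.
    return ((a and not c) or (not a and b and c)
            or (a and b and c) or (not a and b and not c and d))

def h_simplified(a: bool, b: bool, c: bool, d: bool) -> bool:
    # Claimed three-term simplest form.
    return (a and not c) or (b and c) or (not a and b and d)

# Sixteen rows suffice to prove equivalence of two 4-variable functions.
for a, b, c, d in product([False, True], repeat=4):
    assert h(a, b, c, d) == h_simplified(a, b, c, d)
print("the simplified expression is equivalent to the original")
```

An exhaustive check like this is the programmatic counterpart of comparing full truth tables, and it is a useful safety net when practising algebraic reduction by hand.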

Taking this approach requires some skill (and practice). Later in this

lecture, we will look at an alternative technique that relies on using a

graphical method to achieve the same result.


Equivalence re-visited

Two functions, f and g are said to be equivalent if their exhaustive truth

tables lead to identical entries. For example take the two functions:

f = a̅ + b and g = a̅.b + a.b + a̅.b̅

We can prove that these are identical (sometimes written as f ⇔ g)

using truth tables as follows:

a   b   f   a̅.b   a.b   a̅.b̅   g
0   0   1   0     0     1      1
0   1   1   1     0     0      1
1   0   0   0     0     0      0
1   1   1   0     1     0      1

When comparing the column entries for both f and g we obtain the

surprising result that the expressions are equivalent, such that f ⇔ g.
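The same equivalence can be confirmed programmatically. The Python sketch below (illustrative, not part of the notes) compares f and g over all four input combinations:

```python
from itertools import product

# f = NOT(a) OR b; g is the three-minterm sum-of-products form.
f = lambda a, b: (not a) or b
g = lambda a, b: ((not a) and b) or (a and b) or ((not a) and (not b))

for a, b in product([False, True], repeat=2):
    assert f(a, b) == g(a, b)
print("f and g are equivalent")
```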

Standard forms of logic expressions

[Hill and Peterson Chapter 4, Wilkinson Section 3.2]

There is a standard form of expressing Boolean functions that has the

Boolean variables connected with AND operators to form terms that are

summed with OR operators. For example, consider the following

standard form expression:

f(A,B,C) = A.C̄ + Ā.B.C + A.B


Here the three variables are arranged into what are called PRODUCT

terms formed by the AND operator, and these product terms are then

summed using the OR operator. This standard form is called the SUM-

OF-PRODUCTS form and is very popular in Boolean expressions. It can

be extended to form what is called the canonical sum-of-products term

by expanding the expression to include all variables (or their

complement) in each of the product terms. For example, the next

expression is in canonical form.

g(A,B,C) = Ā.B̄.C̄ + Ā.B.C + A.B̄.C + A.B.C̄

The product terms in the canonical sum-of-products expression are

called minterms, and the variables are called literals. The minterms can

be obtained either by using Boolean operations to expand an arbitrary

function into its canonical form, or by using a truth table. Below is the

truth table for the function g(A,B,C):


Each row for which g(A,B,C) is equal to logic 1 generates a minterm, since

any one of these minterms being true is enough to make g = 1 through the OR operator. In this way

the canonical sum-of-products form can easily be derived.

There is another standard form called the product-of-sums form, which

has the variables summed using the OR operator and each term

connected by the AND operator. This time the function f is in the product-

of-sums form:

f(A,B,C) = (A + B).(A + C).(Ā + B + C̄)

Similarly, the function g is now given below in canonical product-of-sums

form, with each sum term referred to as a maxterm:

g(A,B,C) = (A + B + C̄).(A + B̄ + C).(Ā + B + C).(Ā + B̄ + C̄)

Each of the maxterms in the canonical product-of-sums form can easily

be obtained from the truth table. For each entry when g = 0 take the

complement of the variables, and OR them together to form the

corresponding maxterm. This inspection method is really a quick way of

applying de Morgan’s theorem.
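The inspection method can be sketched in code: walk the truth table, emit a minterm for each 1 and a maxterm for each 0. The function used below is an arbitrary illustration, with the apostrophe standing in for the overbar:

```python
from itertools import product

# Deriving the canonical forms from a truth table: each row where the
# function is 1 gives a minterm; each row where it is 0 gives a maxterm.
def func(A, B, C):
    return (A and not C) or (B and C)  # an arbitrary example function

minterms, maxterms = [], []
for A, B, C in product([0, 1], repeat=3):
    if func(A, B, C):
        # minterm: AND together each variable, complemented where the bit is 0
        lits = [v if bit else v + "'" for v, bit in zip("ABC", (A, B, C))]
        minterms.append(".".join(lits))
    else:
        # maxterm: OR together the complements of the row's variable values
        comp = [v + "'" if bit else v for v, bit in zip("ABC", (A, B, C))]
        maxterms.append("(" + " + ".join(comp) + ")")

print(" + ".join(minterms))   # canonical sum-of-products
print(".".join(maxterms))     # canonical product-of-sums
```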

Karnaugh Map reduction method

[Hill and Peterson Section 4.3, Wilkinson Section 3.4.2]

So far we have attempted to reduce a complex Boolean expression into

its simplest form using algebraic techniques. The simplest realisation of

an expression is important to obtain, as this often represents the most

cost effective electronic circuit design (i.e. the least number of gates).

However, algebraic manipulation requires a lot of skill and patience. A


very popular graphical method can be employed for a low number of

inputs, called Karnaugh Map minimisation. Note that for larger more

complicated designs, circuit designers use more sophisticated software

design tools (such as VHDL compilers), but this is beyond the scope of

the course.

The Karnaugh Map consists of a two-dimensional grid, with each

element in the grid corresponding to one minterm in the canonical form

of the sum-of-products expression. The arrangement of the minterms,

either horizontally or vertically, allows for adjacent elements in the grid to

differ only by one variable (which is complemented). This is the key to

how the Karnaugh map works and is achieved by labelling the grid

according to a binary code called the Gray code (see later), in which

successive codes differ by only one bit.

The diagram below is an illustration of how the Karnaugh Map is

constructed. Each box has a unique code; for example, the top-right-

corner box has the code 010, corresponding to Ā.B.C̄. As a further point,

notice that the top row corresponds to the case that A = 0 throughout.


Two types of labelling are used, with both given in the above diagram for

illustration purposes. Only one labelling scheme is used in practice. The

binary numbering around the edges is the Gray code labelling scheme.

The brackets represent a reduced labelling format that shows over which

squares the variable indicated is equal to a logic one – it is an exercise

for the reader to confirm this. The two alternative types of labelling are

given in the next diagram for the function: f = Ā.B̄.C̄ + Ā.B̄.C + A.B.C̄

In the diagram we enter into each box the logic value of the overall

function, f(A,B,C) corresponding to the values of each of the variables.

So for example, f = 1, when ABC = 000, which is the top-left-corner box

entry. Now check the other boxes!

Eliminating functions in Karnaugh maps

Let us consider some more examples of Karnaugh Maps. Either

labelling scheme is fine – you can adopt the one you prefer, but for now

both are shown. On the next page is a new map, corresponding to the

function f = Ā.B̄.C̄ + Ā.B̄.C + A.B.C̄.


We notice that the top left-most two boxes both have logic 1 entries.

This means that the function gives a logic 1 result when ABC = 000

(corresponding to Ā.B̄.C̄) and when ABC = 001 (corresponding to Ā.B̄.C).

Remember that adjacent boxes have been arranged so that only one

variable changes to its complement. We draw a loop around both these

1’s to indicate that they are neighbours; we call this grouping. This

group shows that the variable C changes from C̄ to C and yet the

function f remains at a logic 1 state. In other words C does not make

any difference to the function result when AB = 00. In which case C is

eliminated from that particular minterm.

We can also group the single logic 1 shown in the bottom-right-corner

box, but in this case there are no adjacent boxes containing a logic 1.

Once we have finished grouping we can write down the minterms that

have been identified. The first grouping showed that the two minterms

Ā.B̄.C̄ and Ā.B̄.C can be reduced by elimination to Ā.B̄, whereas the last

minterm cannot be reduced.

Overall, using this graphical method, we find that a simplified function for

f is: f = Ā.B̄ + A.B.C̄. The equivalence can be proved using a truth table.


Grouping rules

Entries in the map can be grouped in numbers of 1, 2, 4, 8, … (i.e. in

powers of 2). As boxes around the perimeter still only have one variable

changing at a time, it is possible to group around the edges of the map,

as illustrated in the example below:

Further examples of different sorts of grouping are shown in the diagram

below. It is important to enclose as large a group as possible to

achieve the best simplification. The corresponding simplified functions

are given for each case, by considering which variables change logic

state without changing the value of the overall function.


Karnaugh maps with more variables

The Karnaugh Map approach can be applied as a simplification tool for

any number of variables, but it soon becomes cumbersome for more

than 5 variables and different software tools are normally employed in

such circumstances. Below is an example of a function with 4 variables,

with the original function in canonical sum-of-products form given on the

right of the map. Grouping has identified the simplified expression:

f = A.D̄ + C.D + Ā.B.D

Simplification using 0’s on Karnaugh Map

Sometimes, once the map has been filled in, it becomes apparent that a

simpler representation of the function can be achieved by grouping the

0’s, instead of the 1’s. This is illustrated below, for the same function

showing alternative groupings. Grouping the 1’s gives a function that is

TRUE under certain conditions, whereas grouping the 0’s provides a

simplification that indicates when the function is FALSE. Deciding which

to adopt depends on the choice of gates available for the design, but in

practice grouping 1’s is more common.


Identifying prime implicants

There are further conventions that have been developed in order to

avoid difficulties associated with the electronic realisation of the circuit

function (e.g. avoiding glitches in the output caused by timing delays –

race hazards, as they are called). The groups on the Karnaugh map are

called implicants, with the largest groups being called the prime

implicants for the function. Prime implicants that contain 1’s that cannot

be grouped in any other way are called Essential Prime Implicants, and

the terms they represent are crucial to the correct realisation of the

function.

Two alternative grouping strategies are shown in the diagram below.

The essential prime implicants have been identified in bold face. A

general strategy is to form the groups containing the essential prime

implicants first and then to group the remaining 1’s. There are options –

for example, the top-left-corner grouping is different between the two

cases shown below. This leads to a different function simplification, but

both are correct. You will notice that the essential prime implicants are

the same in both solutions.


Incomplete variables, and don’t cares

There are cases in which not every combination of the input variables has been assigned an outcome.

So we are uncertain as to what the function will do for certain

combinations of input variables. This happens in designs that have not

been fully specified, and we will come across examples of this later. The

approach taken is to construct a truth table with an exhaustive list of all

input variable conditions, and mark the unspecified function outputs as

X’s (which is called a don’t care state).

The significance of this truth table entry is that we don’t care what the

output is under these conditions: it does not matter whether it is a logic 1

or logic 0 (usually because the design does not consider these cases).

Such an example is given in the next figure, showing the truth table and

the corresponding Karnaugh Map containing the don’t cares. When

grouping, we can consider an X as either a 1 or a 0, and form the groups

using the strategies explained previously.


A circuit design example

Find a simpler form of the logic circuit shown below that creates the

same output.

Start with a truth table and develop the Karnaugh Map, then simplify the

function.


Or by Boolean algebra: F = (A + B)‾ + B + B.C (working from the top of the

circuit)

Apply de Morgan’s Theorem to the first term: F = Ā.B̄ + B + B.C

Factorise, making use of Redundancy theorem:

F = Ā.B̄ + B.(1 + C) = Ā.B̄ + B

F = Ā.B̄ + B = (Ā + B).(B̄ + B) = Ā + B, which gives the same result.

Selecting which method to adopt for function simplification depends a lot

on your skills at algebra. The Karnaugh Map method has many

attractions due to the fact that it is graphical and relies on a strict

methodical approach, but it can be tedious for a large number of

variables. Of course, you should know both methods.


P2 Digital Logic – notes by L. Tarassenko (Lecturer D.C.O'Brien)

LECTURE C – BINARY NUMBER REPRESENTATION

Counting in Binary 42

Octal (8) and Hexadecimal (16) numbers 44

Dealing with negative numbers 45

2’s complement 45

Offset Binary (or Excess n) 47

Introducing Binary Codes 47

Binary Coded Decimal (BCD) 48

ASCII 48

Unit Distance Codes 49

Gray Code 49

Error Detection 51

Use of Parity 51

Some examples – for general information only 53


Lecture C – Binary Number Representation

Counting in Binary

[Hill and Peterson Chapter 3, Wilkinson Chapter 1]

Nature gave us 10 fingers and therefore we evolved a counting system

based on units of 10. This counting system is so deeply rooted in our

education that we often think there is something special about the

number 10 – but there isn’t. In fact mathematicians throughout history

have explored other alternatives.

Using the “base 10” (or decimal, or “radix 10”) system, we make use of

the following notation for representing a number n using m+1 numbers:

n ≡ dₘdₘ₋₁…d₁d₀, where each dᵢ is one of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,

and n = d₀ × 10⁰ + d₁ × 10¹ + d₂ × 10² + … + dₘ × 10ᵐ

An example is the number 598136 - an integer. We interpret this

number as follows:

598136 = 6 × 10⁰ + 3 × 10¹ + 1 × 10² + 8 × 10³ + 9 × 10⁴ + 5 × 10⁵

The same rules apply when considering a number of any radix or base,

using the general form:

n = d₀ × r⁰ + d₁ × r¹ + d₂ × r² + … + dₘ × rᵐ = Σ dᵢ × rⁱ (summing from i = 0 to m), where r is the radix.


So there is nothing special about binary representation, which is radix (or

base) 2, and uses just digits 0 and 1. These numbers are ideal for logic

systems, for which each variable can take the condition of being either

TRUE (i.e. 1) or FALSE (i.e. 0). As an example, let's take the binary

number 101011, of the form n = bₘbₘ₋₁…b₀ with m = 5 in this case.

Converting from binary to decimal is straightforward:

101011 = 1 × 2⁰ + 1 × 2¹ + 0 × 2² + 1 × 2³ + 0 × 2⁴ + 1 × 2⁵

= 1 + 2 + 8 + 32

= 43₁₀

Note that the subscript included in the answer is often added to indicate

the radix, when it is not otherwise clear.

Conversion from decimal to binary is achieved by repeatedly dividing the

number by 2 and writing down the remainder at each step. Keep going

until it is no longer possible to divide, as illustrated in the figure below:

This works for any radix and so, for example, converting the decimal

number 53 to radix 6 gives 125₆.
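The repeated-division procedure can be sketched in Python (the digit set here assumes radices up to 16):

```python
# Decimal-to-arbitrary-radix conversion by repeated division:
# the remainders, read in reverse order, are the digits of the result.
DIGITS = "0123456789ABCDEF"

def to_radix(n, r):
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, rem = divmod(n, r)   # divide by the radix, keep the remainder
        out.append(DIGITS[rem])
    return "".join(reversed(out))

print(to_radix(43, 2))   # 101011
print(to_radix(53, 6))   # 125
```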


Octal (8) and Hexadecimal (16) numbers

The binary representation of a decimal number can lead to long strings

of binary digits for large numbers. Two other bases are in common use

for this reason, base/radix 8 and base/radix 16. The latter is very useful

as groups of 4 digits are common in digital design (16 binary bits is often

called a word, 8 a byte and 4 a nibble….).

Base/radix 8 is called Octal, with n = Σ dᵢ × 8ⁱ (summing from i = 0 to m).

Base/radix 16 is called Hexadecimal, with n = Σ dᵢ × 16ⁱ, using the digits 0–9, A, B, C, D, E, F.

Notice that in hexadecimal we need to represent digits greater than 9,

hence the use of letters A (corresponding to 10) to F (corresponding to

15). Summarising the higher hexadecimal digits:

A₁₆ ≡ 10₁₀ = 1010₂

B₁₆ ≡ 11₁₀ = 1011₂

C₁₆ ≡ 12₁₀ = 1100₂

D₁₆ ≡ 13₁₀ = 1101₂

E₁₆ ≡ 14₁₀ = 1110₂

F₁₆ ≡ 15₁₀ = 1111₂

Working with octal and hexadecimal representation becomes easier

when you consider the numbers in groups of 3 or 4 binary digits. For

example:


Binary → Octal: 1111011101₂ ≡ 001 111 011 101 = 1735₈

Octal → Binary: 217₈ ≡ 010 001 111 = 10001111₂

Binary → Hex: 1111011101₂ ≡ 0011 1101 1101 = 3DD₁₆

Hex → Binary: 1AE₁₆ ≡ 0001 1010 1110 = 110101110₂
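These groupings can be cross-checked with Python's built-in base handling — `int(s, base)` parses a string in a given radix and `format` re-renders it:

```python
# Checking the conversions above with built-in base handling.
n = int("1111011101", 2)
assert format(n, "o") == "1735"          # binary -> octal
assert format(n, "x").upper() == "3DD"   # binary -> hexadecimal
assert format(int("217", 8), "b") == "10001111"    # octal -> binary
assert format(int("1AE", 16), "b") == "110101110"  # hex -> binary
```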

Dealing with negative numbers

There are two common ways of representing negative numbers in the

binary representation (2’s complement and offset binary). The default

assumption is that a binary number is positive UNLESS it is known that a

negative number is being represented. If the computer architecture has

been designed to implement negative numbers, then the most significant

bit is usually assigned to be the “sign bit” and a number is negative if its

most significant bit is a “1”.

2’s complement

To understand the 2’s complement representation, consider first an

example using the 10’s complement representation. Imagine what would

happen to a car milometer set to zero if the car was driven backwards

exactly 1 mile:

99999 = 10⁵ − 1, which is the ten's complement representation of −1 for

5-digit decimal numbers (10ⁿ − 1 with n = 5).


In the 2’s complement form, a negative binary number is represented by

the number 2ⁿ − b. The number of bits used, n, is set by the length of the

binary representation employed within the computer. Usually n is

chosen to be one of 8, 16, 32 or 64 (on the increase as chip technology

improves).

For an 8-bit number, −1 is represented as 2⁸ − 1 = 11111111₂ = FF₁₆. For

a 16-bit number, −1 is represented as 2¹⁶ − 1 = FFFF₁₆. As noted earlier, in

the binary representation of “signed” numbers using 2’s complement, the

number is negative if the leading digit is a 1.

Thus, in the 2’s complement form, the 4-digit binary number, 1011, is a

negative number and its value is 1011₂ − 2⁴ = 11₁₀ − 16 = −5. Note that

+5 is 0101, whereas -5 is 1011. From this, it can be seen that there is a

very simple way to obtain the 2’s complement of a binary number, which

is to invert all the bits and then add one.

To check: 0101 (i.e. +5) → 1010 + 1 → 1011 = -5, and it works going the

other way as well.

This can be proved as follows. Note first that 2n - 1 generates a binary

number with all ones. By inspection (but you should demonstrate this in

the tutorial example), we recognise therefore that (2n - 1) - b will invert all

the bits in b – i.e. creating the one's complement of b. We therefore

have:

(2ⁿ − 1) − b = b̄  ⇒  2ⁿ − b = b̄ + 1, the 2's complement of b.
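The invert-and-add-one rule can be sketched for an n-bit word (the mask keeps the result within n bits):

```python
# 2's complement on n bits: invert all the bits, then add one.
def twos_complement(b, n):
    return (~b + 1) & ((1 << n) - 1)

assert twos_complement(5, 4) == 0b1011    # +5 -> the pattern for -5
assert twos_complement(0b1011, 4) == 5    # and it works the other way
assert twos_complement(1, 8) == 0xFF      # -1 on 8 bits
assert twos_complement(1, 16) == 0xFFFF   # -1 on 16 bits
```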


Offset Binary (or Excess n)

If a constant offset, c, is added to all numbers, then we have a

representation given by:

b_offset = b + c

This is similar to shifting the origin. Consider an 8-bit offset binary

representation where c = 128 (or 8016), then:

0₁₀ = 80₁₆ offset

−128₁₀ = 00₁₆ offset

127₁₀ = FF₁₆ offset

Thus the binary number 0 is considered to represent –c and the binary

number c to represent zero. This representation is used in hardware

such as analogue-to-digital converters to represent negative as well as

positive voltages.
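A sketch of the encoding (the helper name is illustrative):

```python
# Offset binary with an 8-bit representation and offset c = 128:
# the stored code is the true value plus the offset.
C = 128

def to_offset(value):
    return value + C

assert to_offset(0) == 0x80      # zero is stored as 80 hex
assert to_offset(-128) == 0x00   # most negative representable value
assert to_offset(127) == 0xFF    # most positive representable value
```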

Introducing Binary Codes

Binary codes are used to encode binary data for a variety of reasons; for

example, ASCII codes are a very common means of communication

between a computer and its peripherals (keyboard or printer). By

introducing extra bits (redundancy) in the code, transmission errors (due

to a “noisy” channel of communication, for example) can be corrected.

Other forms of codes, used for secure data transmission, introduce

dependency rather than redundancy.

The sections that follow consider the most useful binary codes.


Binary Coded Decimal (BCD)

[Hill & Peterson section 3.3]

Some operations are more efficient if each decimal digit is represented

individually by its binary code – despite this being an inefficient use of

memory. Binary Coded Decimal is used for storing decimal digits as in a

telephone directory, or for converting data into an appropriate form for a

decimal display (such as a seven-segment LED display).

Example:
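A sketch of BCD encoding, one 4-bit group per decimal digit:

```python
# BCD encodes each decimal digit separately as a 4-bit group,
# e.g. 29 -> 0010 1001 (rather than 11101, its pure binary form).
def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(29))   # 0010 1001
```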

ASCII

[Hill & Peterson section 6.3]

ASCII (American Standard Code for Information Interchange) codes are

used to represent alphanumeric characters, primarily for communication

between a computer and its keyboard or printer. The character set for

the standard ASCII code is shown below:

! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~

Code values are assigned to characters consecutively in the order in

which the characters are listed above (row-wise), starting from 33

(assigned to the ! character) and ending with 126 (assigned to the

tilde character ~). Position 32 is the space character, and positions

0 through 31 and 127 are reserved for control codes.

Examples of ASCII code:

Upper case ‘A’ = 41₁₆, ‘B’ = 42₁₆, …. ‘Z’ = 5A₁₆

Lower case ‘a’ = 61₁₆, ‘b’ = 62₁₆, …. ‘z’ = 7A₁₆

Digits ‘1’ = 31₁₆, ‘2’ = 32₁₆, ….
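These assignments can be confirmed in any ASCII-aware language, e.g.:

```python
# ord() returns a character's code value; the ASCII assignments above
# can be checked directly.
assert ord("A") == 0x41 and ord("Z") == 0x5A
assert ord("a") == 0x61 and ord("z") == 0x7A
assert ord("1") == 0x31
assert ord(" ") == 32 and ord("~") == 126   # the printable range limits
```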

Unit Distance Codes

Consider a shaft encoder connected to a motor to give position

feedback. If such an encoder uses 3 bits to represent angular position (a

resolution of 45°), consider what happens as the position crosses from

179° to 181°. The code should change from 011 to 100. However, the

manufacturers will not be able to ensure that all the lines on the encoder

are accurately aligned and there will be a short time in which all the bit

values will be uncertain and/or changing. Thus any 3 bit number could

be generated at the boundary between 011 and 100!

This problem may be avoided by using a unit-distance code (the

distance between two n-bit binary numbers is measured as the number

of binary digits that are different) in which only one bit at a time is

allowed to change (hence unit distance).

Gray Code

[Hill & Peterson section 6.3]

A popular unit-distance code is the Gray code, which is often referred to

as a reflected binary code. The code is built up by starting with the


sequence 00, 01, 11, 10 (this is the same sequence used in Karnaugh

maps). The next most significant bit is then set and the code is reflected

(i.e. reversed) – 10, 11, 01, 00 – resulting in a 3-bit code. The fourth bit

is then set and the previous sequence reflected about this point. This is

repeated until an n-bit code is obtained.

Example – 4 bit Gray code

0000 – start

0001

0011

0010 – reflect about 3rd bit

0110

0111

0101

0100 – reflect about 4th bit

1100

1101

1111

1110

1010

1011

1001

1000

0000 – and so on …

A useful feature of the Gray code is that it is possible to convert directly

from binary into Gray code. Let gi denote the ith bit of the Gray code

representation of a binary number bnbn-1……b0.


We can write:

gᵢ = bᵢ ⊕ bᵢ₊₁ (with gₙ = bₙ for the most significant bit)

where ⊕ is the Exclusive OR operation: A ⊕ B = A.B̄ + Ā.B
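Applied to a whole word, this bitwise relation is g = b ⊕ (b >> 1), which reproduces the reflected sequence directly:

```python
# Binary-to-Gray conversion: g_i = b_i XOR b_(i+1), i.e. for a whole
# word g = b XOR (b >> 1). Consecutive codes differ in exactly one bit.
def gray(b):
    return b ^ (b >> 1)

codes = [gray(i) for i in range(16)]
assert codes[:4] == [0b0000, 0b0001, 0b0011, 0b0010]
# unit-distance property, including the wrap-around from 1000 to 0000
for i in range(16):
    assert bin(codes[i] ^ codes[(i + 1) % 16]).count("1") == 1
```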

Error Detection

In real applications, noise and/or interference can corrupt binary data

(i.e. a ‘1’ can become a ‘0’ and vice-versa) when data is transmitted or

written to a storage medium. Error detection is particularly important for

storage media when, for example, stored data is recovered from a hard

disk or a CD ROM. There are two options for dealing with corrupted

data:

• In the simplest case, we need to know that an error has occurred.

This can be done with a parity bit (see below).

• With sophisticated encoding of the data (e.g. Hamming codes),

some errors can be corrected. A classical example is the

transmission of data from a space probe. Codes with a huge

amount of redundancy can be transmitted which will allow the very

weakly received data to be reconstructed.

Use of Parity

One way of using redundancy is to define a parity bit (usually the most

significant bit in an 8 bit byte). Even parity means that the total number

of 1’s in the 8 bits (including the parity bit) is even, whereas odd parity

means that the total number of 1’s is odd.


Computers often communicate with devices such as printers using 7-bit

data (such as an ASCII code) and one parity bit. A system must expect

even or odd parity as the normal situation and then detect an error if the

parity bit is wrong. There is of course no way of telling which particular

bit is wrong. Furthermore, an even number of bit errors (two, for example) will go undetected when using a

single parity bit for error checking.

The parity bit can be generated as follows. Consider two binary bits,

b1b0. We can determine if they represent an even or odd sum of bits by

using the Exclusive OR function.

b1 ⊕ b0 = 1 if there is only one bit set (i.e. equal to one)

= 0 if there is an even number of bits set (0 or 2)

This can be generalised to any number of bits by cascading the

Exclusive OR gates.

Example: Assuming even parity for a 4-bit binary number, how should

the decimal value 4 be represented as a binary code?

4₁₀ = 0100. The parity bit needs to be set. In hardware, the parity bit b3

would be obtained from:

b3 = b2 ⊕ b1 ⊕ b0

and so the final code would be 1100.
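The cascaded-XOR parity generator, sketched in Python:

```python
# Even-parity bit for a 3-bit value: the cascaded XOR of the data bits.
# For 4 (binary 100) the parity bit is 1, giving the code 1100.
def parity_bit(b2, b1, b0):
    return b2 ^ b1 ^ b0

assert parity_bit(1, 0, 0) == 1               # decimal 4 -> parity 1
code = (parity_bit(1, 0, 0) << 3) | 0b100     # prepend the parity bit
assert format(code, "04b") == "1100"
```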


Some examples – for general information only

Compact Disks

Data on a compact disk (whether playing music or retrieving computer

data) is read using a tightly focused laser beam that attempts to follow

the “tracks” of data as the disk spins. The laser must stay in focus and

not deviate across the tracks even though the disk might wobble or have

“finger prints” on the disk surface.

The reflected laser light is analysed to discover the location of dots

laying in the tracks in order to decode the binary data on the disk

surface. This process, not surprisingly, is very much error-prone, but

data encoding with sufficient redundancy to allow error correction

makes CD disk technology very robust.

TCP/IP and internet data exchange

There is a lot on the web about data transfer between computers using

TCP/IP protocols. It is a useful example of how digital codes can be

used to make a robust interface for data transmission between remote

computers. TCP stands for Transmission Control Protocol and the IP bit

stands for Internet Protocol. These communication algorithms (i.e.

protocols) were originally developed for the US Department of Defence

who needed an extremely reliable networking procedure for obvious

reasons.

TCP, when broadcasting over the internet, offers a connection oriented

transmission by establishing a connection between source and

destination machines and terminating the connection when transmission

is complete. The protocol offers data flow control, error checking of


packets, acknowledgement of successful receipt of packets after error

checking and packet sequencing.

IP is used to provide the addressing functionality that is required to

ensure packets reach their correct destination. All network devices

should be uniquely identified by their IP address. The IP address is used

by routers to determine the best delivery path for each individual packet.

The format of data transmission includes layers of encoded information.

It works well because of the complex encoding schemes employed.


P2 Digital Logic – notes by L. Tarassenko (Lecturer D.C.O'Brien)

LECTURE D – BINARY ARITHMETIC

Addition 56

The Half Adder 56

The Full Adder – Using Carry in 57

Ripple Addition 59

Look-Ahead Carry 59

Subtraction 61

Multiplication 63

Fixed-Point Arithmetic 65

Fixed-point addition 65

Fixed-point multiplication 66

Floating Point Numbers 67


Lecture D – Binary Arithmetic

Addition

[Hill & Peterson, Chapter 6]

With 2’s complement representation, it is possible to perform both

addition and subtraction using the same hardware. In this lecture the

hardware necessary for this will be introduced and then the concepts will

be extended to include multiplication and the representation of fractions

in computer memory.

The Half Adder

The simplest possible addition is to add together two binary bits, i.e. S =

A + B (where + here means arithmetic addition, not the logical OR), where A and B

can be 0 or 1. The set of possible results is:

A B S = A + B

0 0 0

1 0 1

0 1 1

1 1 0 (carry = 1)

The above truth table describes how the output of the addition changes

for all possible inputs. Note there is an extra bit produced by the addition

shown in the bottom right cell. This bit is called the Carry, and is exactly


the same as the “Carry over” produced, for example, when adding 8 to 6

in decimal (i.e. the next power, or higher digit, is incremented by one).

It can be observed that the truth table for S is identical to that of the

Exclusive OR logic function. We can write:

S = A ⊕ B and C = A.B (i.e. the logical “AND” of A and B)

The circuit description for the implementation of these functions, which is

known as the half-adder circuit, is shown below:
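In code, the half adder is one XOR gate and one AND gate:

```python
# The half adder: sum is XOR, carry is AND.
def half_adder(a, b):
    return a ^ b, a & b   # (S, C)

# The four rows of the truth table above:
assert half_adder(0, 0) == (0, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 1) == (0, 1)   # sum 0, carry 1
```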

The Full Adder – Using Carry in

The half-adder allows two bits A and B to be added together. Suppose

that these two bits happen to be in the middle of a group of binary bits.

What is then missing is the ability to deal with a carry-over from less

significant bits (i.e. bits to the right). This is easily remedied by enabling

a carry in bit, as well as a carry out. In the general case the ith bit would

be:

Si = Ai ⊕ Bi ⊕ Carry-ini

The truth table for a full adder, which has A, B, and Carry-in as inputs

and S and Carry-out as outputs, is shown below.


Ai Bi Cin Sum Carry Out

0 0 0 0 0

0 0 1 1 0

0 1 0 1 0

0 1 1 0 1

1 0 0 1 0

1 0 1 0 1

1 1 0 0 1

1 1 1 1 1

It is simple to show, from the truth table, that

Carry-out = A.B + (A ⊕ B).Carry-in

Hence the combined logic circuit becomes:
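These sum and carry equations can be checked exhaustively with a short Python sketch (an illustration of the logic, not the gate-level circuit):

```python
def full_adder(a, b, cin):
    """One-bit full adder: Si = Ai xor Bi xor Cin,
    Carry-out = A.B + (A xor B).Cin."""
    s = a ^ b ^ cin
    cout = (a & b) | ((a ^ b) & cin)
    return s, cout

# Exhaustive check against ordinary arithmetic: s + 2*cout == a + b + cin
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
```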


Ripple Addition

So far we have added single bits together, taking into account the need

to include carry bits. Numbers in practice, however, are represented by a

combination of bits and therefore it is necessary to develop a strategy for

dealing with groups of bits that form words. The least significant bit

(LSB) can be handled conveniently using the half adder circuit described

above, but all the other bits must be able to handle carry overs from the

adjacent lower bits. So we need a series of full adders to achieve this.

The carry from each bit addition cascades along the chain as illustrated

below.

One problem with this circuit (as indicated in its name) arises because it

takes time for the effects of the carry bits to “ripple” through from the LSB

end to the MSB. This means that the total sum only becomes correct

after a delay which increases in proportion to n, the

number of bits.

Ripple-Carry adder (using Full and Half Adders)
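The ripple scheme can be sketched in Python, chaining the full-adder equations so the carry propagates from stage to stage (the LSB-first list convention is an illustrative choice):

```python
def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first).
    Returns (sum_bits, final carry): the 'carry' variable ripples
    from each stage into the next."""
    carry = 0
    total = []
    for a, b in zip(a_bits, b_bits):
        total.append(a ^ b ^ carry)                # sum bit for this stage
        carry = (a & b) | ((a ^ b) & carry)        # carry into the next stage
    return total, carry

# 0101 (5) + 0011 (3), LSB first, gives 1000 (8) with no carry out
```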

Look-Ahead Carry

Sometimes it is important to add two n-bit numbers (or words) with

minimal delay. We can perform addition much faster than is offered by

the ripple-carry adder if we compute the carry ahead of the sum. This

may be important when detecting “overflow” in addition, for example.


Overflow occurs when the addition of two n-bit words produces a result

which requires n+1 bits; for example, the addition of two 8-bit numbers

can produce a sum that can only be properly represented with a 9-bit

word. (In this case, the 9th bit can be recognised as the carry bit beyond

the MSB of the 8-bit numbers). Detecting the result of the MSB carry-

over could be more important than the addition itself, and a circuit can be

designed that calculates the carry in advance. This is called “Look-

Ahead” and the combinational logic required to do this is given below.

Note that the addition is not performed any faster, just the computation of

the final carry bit.

Consider the problem of generating Cn, the Carry-out from bits n and n-1.

Three conditions govern the generation of a Carry-out:

1. An = Bn = 1: a Carry-out is generated.

2. An ≠ Bn: the Carry is propagated, i.e. a Carry-out occurs only if

there is a Carry-in (the Carry-out from bit n-1).

3. An = Bn = 0: no Carry-out is generated.

Let Gn denote carry generate, Gn = An.Bn (the “dot” means logical AND).

Let Pn denote carry propagate, Pn = An + Bn. Then:

Carry-out = Cn = Gn + PnCn-1

This is a recurrence relation giving the current Carry-out in terms of the

state of the nth bits of A and B and the previous Carry-out Cn-1. Hence

overall we have:

Cn = Gn + Pn.[Gn-1 + Pn-1.[Gn-2 + Pn-2.[ … [G1 + P1.G0] … ]]]


Using this logic expression it is possible to construct a logic circuit that

will predict the overall carry of the addition of two n-bit numbers.
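As a sketch, the generate/propagate recurrence can be evaluated in Python (a sequential check of the values; real look-ahead hardware expands the recurrence into two-level logic to gain speed):

```python
def carry_lookahead(a_bits, b_bits):
    """Compute every carry from Gn = An.Bn (generate) and
    Pn = An + Bn (propagate), using the recurrence Cn = Gn + Pn.Cn-1.
    Bits are given LSB first; the last element is the overall carry."""
    c = 0
    carries = []
    for a, b in zip(a_bits, b_bits):
        g = a & b        # carry generate
        p = a | b        # carry propagate
        c = g | (p & c)  # the recurrence from the notes
        carries.append(c)
    return carries
```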

Subtraction

Subtraction can be implemented using the same hardware as that used

for addition. This is possible because of the 2's complement

representation of binary numbers. To obtain A - B, we can compute A +

(B in 2's complement form).

We have already seen that the 2's complement form of a number can be

generated by inverting all the bits and adding a binary “1” to the result.

The addition of “1” is easily achieved by using the carry-in line,

as shown below:

4-bit Adder/Subtractor

When the Add/Subtract line is equal to “0” (i.e. add) the EX-OR gates do

not modify the value of B. When this line is “1”, however, the action of

the EX-OR gates is to invert all the individual bits of B (in fact generating


the 1's complement). The Carry-in to the least significant bit must be

zero when the subtract is executed, so that the addition of “1” is effected

by the action of the right-most EX-OR gate.
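A Python sketch of the adder/subtractor's behaviour, with the EX-OR inversion and the injected carry-in modelled arithmetically (word width n = 4 is an illustrative choice):

```python
def add_sub(a, b, subtract, n=4):
    """n-bit adder/subtractor. When subtract = 1 the EX-OR gates invert
    every bit of B (the 1's complement) and a 1 enters via the carry-in,
    giving the 2's complement. The result is modulo 2^n."""
    mask = (1 << n) - 1
    b_eff = (b ^ mask) if subtract else b  # EX-OR gates pass or invert B
    return (a + b_eff + subtract) & mask

# add_sub(3, 7, 1) gives 12, which is -4 in 4-bit 2's complement
```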

Carry and Overflow flags

The Carry bit indicates whether or not the sum of two binary words

creates a “Carry over” from the MSB. But this alone is insufficient when

signed arithmetic is implemented. When two binary numbers are being

added using 2's complement representation, a case may arise when the

sum of two positive numbers might incorrectly be interpreted as a

negative number (which in 2's complement representation would be

identified as a binary number with the MSB equal to 1). Similarly, the

sum of two negative numbers might be incorrectly displayed as a

positive number (i.e. a 0 for the MSB). These errors are a consequence

of the fact that the result of the addition cannot be adequately

represented by the limited number of bits available, and in this case an

overflow must be indicated. For an n-bit binary number, the largest

positive number that can be represented is equal to 2^(n-1) - 1 and the most

negative number is -2^(n-1). As an illustration, consider the problem of

adding 65 to 65, using 8-bit binary numbers:

  01000001       65

+ 01000001     + 65

----------   ------

  10000010     -126


The correct sum (i.e. 130) has a 1 in the MSB, and is therefore

incorrectly interpreted as -126. We need to indicate the fact that an

overflow has occurred. Within the Arithmetic Logic Unit (ALU) in a

microprocessor, there is a specific register that holds a series of bits that

indicate the “condition” of the sum. These condition bits are called flags

and they are useful in a computer program because decisions can be

made in the code based on the values of these flags.

As well as the C flag (Carry), there is the V flag (Overflow). Determining

the V flag is straightforward given the arguments above. For the case of

C = A + B, Overflow has the logic description:

V = A.B.C̄ + Ā.B̄.C

(where A, B and C here denote the MSBs, i.e. the sign bits, of the two operands and of the sum respectively, and the bar denotes logical NOT)
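A sketch of the C and V flag calculation in Python; the sum's MSB is renamed S here simply to avoid clashing with the Carry flag:

```python
def flags(a, b, n=8):
    """Carry (C) and Overflow (V) flags for the n-bit sum of a and b.
    V = A.B.(not S) + (not A).(not B).S, taken on the MSBs, where S is
    the MSB of the truncated sum."""
    msb = 1 << (n - 1)
    total = a + b
    s = total & ((1 << n) - 1)          # the n-bit result
    carry = 1 if total >> n else 0      # carry beyond the MSB
    A, B, S = a & msb != 0, b & msb != 0, s & msb != 0
    v = 1 if (A and B and not S) or (not A and not B and S) else 0
    return s, carry, v

# 65 + 65 = 130: the 8-bit result 10000010 reads as -126, so V = 1
```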

Multiplication

Integer multiplication may be achieved by successive shifts and adds,

these being the basic operations of long multiplication. Consider the

multiplication of two positive binary numbers C = A x B. Write B as

B = Σ (i = 0 to n) bi x 2^i

where bi are the bits of B and are either 1 or zero. Then

C = A x B = A x Σ (i = 0 to n) bi x 2^i = Σ (i = 0 to n) [(A x 2^i) x bi]

However, the bits bi are either one or zero and thus the sum reduces to:


C = Σ (i = 0 to n, bi ≠ 0) A x 2^i

Multiplication by 2i corresponds to shifting left by “i” places and replacing

the right-most vacated bits by zeros. Hence the above multiplication may

be carried out using the following logical operations:

1. Load the n-bit number A into a 2n-bit memory location.

2. Load the n-bit number B into an n-bit memory location.

3. Clear another 2n-bit memory location and allocate it to C.

4. Set a counter count to zero.

5. If bcount (bit number count of B) is one, add the contents of A to C.

6. Shift A left one place and add 1 to count.

7. If count is less than n, go to 5.

8. End.

The above is an example of pseudo-code. It is a free description, in

plain English, of a set of operations necessary to achieve the

multiplication. Implementing this in hardware is of course possible, but

beyond the limits of this course. Please note also that this simple

algorithm does not work for signed arithmetic, i.e. when using 2's

complement representation.
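For illustration, the numbered steps translate directly into Python (unsigned arithmetic only, as noted above):

```python
def multiply(a, b, n=8):
    """Unsigned shift-and-add multiplication following the numbered
    steps above; a and b are n-bit numbers, the product needs 2n bits."""
    c = 0                     # step 3: clear C
    for count in range(n):    # steps 4 and 7: count from 0 to n-1
        if (b >> count) & 1:  # step 5: test bit number 'count' of B
            c += a            # ...and add the (shifted) A to C
        a <<= 1               # step 6: shift A left one place
    return c
```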


Fixed-Point Arithmetic

Numerical computation would not progress very far if it were not possible

to represent fractions with binary numbers. After all, this is how we

manipulate numbers in decimal notation (e.g. π = 3.141592…..). The

format is simple and recognised as:

3.141592…. =

3x10^0 + 1x10^-1 + 4x10^-2 + 1x10^-3 + 5x10^-4 + 9x10^-5 + 2x10^-6 …..

The same is true for binary numbers. So for example:

010111.11 = 1x2^4 + 0x2^3 + 1x2^2 + 1x2^1 + 1x2^0 + 1x2^-1 + 1x2^-2

= 16 + 4 + 2 + 1 + 1/2 + 1/4 = 23.75

It is important to be careful about the position of the “binary” point

however, as computers normally have a discrete number of bits in order

to represent each binary number. There are two clear strategies, either

have a fixed position for the point or implement a procedure that allows it

to move.

Fixed-point addition

When adding A and B to obtain C, both A and B can be shifted left by

exactly the same number of times as there are bits representing the

fractional part of the number. (Remember that shifting a binary number

by one place to the left is equivalent to multiplication by 2.) This

temporarily removes the influence of the binary point and allows addition

to proceed as before.


Hence:

C(integer part).C(fractional part) = A(integer part).A(fractional part) + B(integer part).B(fractional part)

becomes….

C(integer part).C(fractional part) = (2^-f x A) + (2^-f x B) = 2^-f x (A + B)

where it is assumed that “f” bits are used to describe the

fractional part of the number. So the addition part of the operation has

not changed; the result just needs to be shifted back the same number of

bits.

Fixed-point multiplication

Applying the same concept gives:

C(integer part).C(fractional part) = A(integer part).A(fractional part) x B(integer part).B(fractional part)

leading to…..

C(integer part).C(fractional part) = (2^-f x A) x (2^-f x B) = 2^-2f x (A x B)

There can be a problem here, arising from the need to shift the bits by 2f.

Bits may be shifted so many times that they go beyond the number of

bits available to represent the number. This means that an error occurs

and that the result will be incorrect. This is one of the drawbacks of such

a simple method as fixed-point arithmetic. Another obvious problem is

that it is an inefficient use of the available bits (creating a lack of

precision). Allowing the binary point to move is much more practical

and this is called floating-point arithmetic.
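As an illustration of the rescaling involved, a Python sketch of fixed-point multiplication (the choice of f = 4 fraction bits is an assumption for the example):

```python
F = 4  # assumed number of fractional bits for this example

def to_fixed(x, f=F):
    """Scale a value by 2^f so the fractional part becomes integer bits."""
    return round(x * (1 << f))

def fixed_mul(a, b, f=F):
    """Multiplying two fixed-point values yields 2f fraction bits,
    so the raw product is shifted right by f to restore the format."""
    return (a * b) >> f

# 2.5 x 1.5 = 3.75: fixed_mul(to_fixed(2.5), to_fixed(1.5)) == to_fixed(3.75)
```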


Floating Point Numbers

We need to keep track of the position of the binary point, and so it seems

sensible to separate this from the numerical part of the number. Thus:

1011.10101 becomes 1.01110101 x 2^3

This is identical in form to the exponential representation of decimal

numbers, and can therefore be precisely defined in the standard format:

±1.??????? x 2^±f

where the “?” refer to the fraction of the number, and “f” the exponent.

Note that it is a sensible strategy always to adjust the exponent of the

number such that the “mantissa” is in the form of “1.???????”, i.e. a one

followed by the binary point, then the fraction. This standard form must

be strictly adhered to, otherwise the floating-point scheme will fail to be

consistent when manipulating numbers.

How do we express this form in a fixed number of binary digits? This

problem was solved a long time ago by adopting what is now called the

IEEE floating-point representation. This has the following format:

So, in the case of a 32 bit number, s is the sign bit (left most bit), the

exponent is the next 8 bits, and the fraction is represented by the

remaining 23 bits. The precision with which numbers can be expressed

using this standard form is determined by the number of bits used in the

fraction, whereas the range of the number is dependent on the bits used

to record the exponent.

In more detail:


Sign bit: This allows us to use “signed arithmetic”. If s = 1, then we have

a negative number.

The exponent: The 8 bit exponent has a numerical range of 0 to 255.

However, it is represented using what is called “excess 127” format.

This was introduced earlier in these lectures as offset binary. What this

means is that the number 127 is regarded as being equivalent to zero,

so that numbers above 127 are interpreted as positive and those below

as negative. In its simplest form the true value is “(exponent - 127)”, with

the additional restriction that the numbers 00000000 and 11111111 are

reserved to indicate numerical under/over-flow (i.e. the number is

beyond the range).

The fraction: The rules require the number to have its exponent adjusted

so that the mantissa is always “1.??????????….”. As the “1” at the start

of the mantissa is always going to be there, it can be removed from the

full number representation. It is called the implied bit (sometimes

referred to as the “missing bit”). The fraction therefore only needs to

record the binary values to the right of the binary point and essentially

this gives an extra bit of precision in representing the number. This

process is called normalisation and must be applied to all numbers at all

times.

In summary, the 32-bit IEEE floating point representation has the

following format:

(-1)^s x 2^(exponent - 127) x 1.fraction
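This unpacking can be illustrated in Python using the standard struct module (a sketch for inspecting the fields, not part of the notes' hardware discussion):

```python
import struct

def decode_ieee32(x):
    """Split a value into the 32-bit IEEE fields: sign, excess-127
    exponent, and the mantissa with the implied leading 1 restored."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF       # stored value; true exponent is this - 127
    fraction = bits & 0x7FFFFF           # the 23 stored fraction bits
    mantissa = 1 + fraction / (1 << 23)  # put the implied "1." back
    return sign, exponent, mantissa

# 1011.10101 binary is 11.65625 decimal, i.e. 1.01110101 x 2^3,
# so the stored exponent is 127 + 3 = 130
```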


P2 Digital Logic- notes by L. Tarassenko (Lecturer D.C.O'Brien)

LECTURE E – Multiplexers, ROMs and PLAs

Introduction to electronic integration 70

Multiplexers (MUX) 71

Read Only Memory (ROM) 72

The Decoder 76

Uses of ROMs 76

Buses and Tri-state outputs 78

Chip Select and Output Enable 80

Programmed Logic Array (PLA) [Hill & Peterson section 7.5] 82


Introduction to electronic integration

[Hill & Peterson Chapter 7, Green Chapter 5]

So far we have looked at simple circuits with only a few gates,

considering how they are connected together to form a particular logical

function. Modern-day digital electronics is not like this. Electronic

engineers normally develop complex circuits using sophisticated design

tools that are increasingly moving away from schematic diagrams of

interconnected gates towards high level software languages (such as

VHDL).

It is more cost-effective for circuit designers to rely on the precision of

a computer language for describing their designs rather than drawing

large numbers of wires and transistors. With this approach, entire

sections of complex circuitry can be brought together into a large-scale

integrated circuit.

Outside of the major manufacturers of large-scale integrated circuits

such as Intel, there is still a need to go down the traditional route of

designing a printed circuit board and building up the circuit complexity by

connecting relatively simple integrated circuits together. These simple

building blocks may be groups of NAND gates, flip-flops, simple

arithmetic units, logic arrays, etc. The progression is from “small-scale

integration (SSI)” to “medium scale integration (MSI)” and then onto

“very large scale integration (VLSI)” (where the Intels of this world

operate) and towards ULSI. In this lecture, we will consider medium

scale integrated circuits. A rough definition of MSI chips would be a

single integrated circuit containing up to a hundred gates or so, which

might provide the basic building blocks for connecting large sub-units in

a computer together.


Multiplexers (MUX)

Very often in electronic circuits it is necessary to route just one particular

data path onto the next stage in the circuit. A multiplexer is a circuit

component that performs this task. It has a number of data input lines,

usually a single data output line and the appropriate number of bits for

selecting which input is directed towards the output. It is analogous to a

telephone exchange, whereby the data that gets through to a destination

is determined by the switch settings at any of the multiplexers present in

the data path.

The following circuit is a typical multiplexer where one of four inputs,

D0, D1, D2, or D3, is chosen by setting the select inputs A and B. The

logic equation is:

D = Ā.B̄.D0 + Ā.B.D1 + A.B̄.D2 + A.B.D3

Four-input multiplexer
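A Python sketch of a four-input multiplexer in sum-of-products form (the pairing of select values with particular inputs is an assumption; the circuit figure fixes the actual ordering):

```python
def mux4(d0, d1, d2, d3, a, b):
    """Four-input MUX: D = ~A.~B.D0 + ~A.B.D1 + A.~B.D2 + A.B.D3,
    where a and b are the single-bit select lines."""
    na, nb = 1 - a, 1 - b
    return (na & nb & d0) | (na & b & d1) | (a & nb & d2) | (a & b & d3)
```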


The standard way of representing such a multiplexer in a schematic

circuit is shown below. Note that the lines A and B can be considered

as address lines, represented collectively in this case as G. The 0/3

notation refers to the addressable space, i.e. 0-3 selectable lines. De-

multiplexers are circuits which perform exactly the reverse function.

Read Only Memory (ROM)

[Hill & Peterson section 7.3]

A Read Only Memory is used for permanent data storage. The data to

be stored in a ROM must be decided before it is manufactured as the

binary data is encoded into the photolithographic masks that are used in

semiconductor device fabrication. As this is an expensive process it is

only really economical for mass production. It is also important that the

stored data has been proven to be correct before mass production

begins.

PROMs are programmable ROMs where the storage elements are

fusible links that can be “blown” by applying electrical voltages to the

device terminals when in write mode. Once these links have been


fused, the data is fixed and cannot be changed. These components are

relatively cheap and ideal for designs with a smaller target market.

EPROMs, “Erasable PROMs”, are a step further. They tend to rely on

charge storage effects in MOSFETs which act in a similar fashion to the

fused links in the ROM. When the EPROM is exposed to UV light, the

data can be erased and the EPROM re-programmed. EEPROMs or

Flash-EPROMs are electrically erasable devices and are ideal for

prototyping software or retaining information that changes infrequently.

All of these devices can be referred to as ROMs.

Structure of a ROM

A ROM is a look-up table, with every data entry considered to have a

unique identifiable address. If the data held in the ROM corresponds to

a 16-bit word, then the data output line is 16 bits wide. If the ROM

contains a total of 1024 such data words then it must have sufficient

input lines to be able uniquely to select any one out of the 1024, i.e. the

ROM must have a 10-bit address. Thus if a ROM has n input lines and m

output lines, then it will have 2^n separate addresses, with an m-bit word

stored at each address. Within the ROM, all possible address values

must be decoded which, for some applications, might be an inefficient

approach.

As a trivial example, consider a 3-input, 2-output ROM. Each input is

labelled Ai, i = 0, 1, 2 and each output is labelled Di, i = 0, 1. There are

therefore 2^3 = 8 possible input addresses, each corresponding to a 2-bit

output. The table below is an example in which the

2-bit data stored are 10, 11, 01, 01, 10, 00, 10 and 11, starting at

address 0 and finishing at address 7.


A2 A1 A0 | D1 D0
 0  0  0 |  1  0
 0  0  1 |  1  1
 0  1  0 |  0  1
 0  1  1 |  0  1
 1  0  0 |  1  0
 1  0  1 |  0  0
 1  1  0 |  1  0
 1  1  1 |  1  1

The above is a map of the memory contents of the chip. The columns

D0 and D1 may be considered as stored data bits, or alternatively as

representing two logic functions where:

D0 = Ā2.Ā1.A0 + Ā2.A1.Ā0 + Ā2.A1.A0 + A2.A1.A0

D1 = Ā2.Ā1.Ā0 + Ā2.Ā1.A0 + A2.Ā1.Ā0 + A2.A1.Ā0 + A2.A1.A0

The logical expressions for 0D and 1D are given in their sum of products

form (each product term being referred to as a minterm – see lecture 2).
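Viewed as a look-up table, the same ROM can be sketched in Python:

```python
# ROM contents from the table above: addresses 0-7, each word is (D1, D0)
ROM = [(1, 0), (1, 1), (0, 1), (0, 1), (1, 0), (0, 0), (1, 0), (1, 1)]

def rom_read(a2, a1, a0):
    """Decode the three address lines into an address and look up the word."""
    return ROM[(a2 << 2) | (a1 << 1) | a0]
```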

Each of the minterms must be decoded within the ROM and its contents

configured to provide the relevant data. We can thus break down the

internal structure of the ROM into three components:

1. A Decoder: This has n input lines and 2^n output lines. Decoder

line N is set to logic 1 when the input lines encode the binary

number N.

2. A 2^n-by-m matrix of switches that store the individual bits (i.e. the

ROM contents). Row N of the matrix is switched on by line N

from the decoder.


3. An output stage which ORs the outputs from each column of the

switch matrix to generate the required output functions.
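Component 1, the decoder, can be sketched in Python (address lines given MSB first, an illustrative convention):

```python
def decode(address_bits):
    """n-to-2^n line decoder: output line N goes to logic 1 exactly
    when the address lines encode the binary number N."""
    n = len(address_bits)
    value = 0
    for bit in address_bits:                 # accumulate the address, MSB first
        value = (value << 1) | bit
    return [1 if line == value else 0 for line in range(1 << n)]
```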

An illustration of the workings of a ROM is given in the figure below.

A possible realisation for a simple ROM

The figure shows a 3-to-8 line decoder which handles the addressing.

Only the lines and switches corresponding to decoder lines 0, 1 and 7

are shown - the others are represented as dotted. The details of how


the internal switches and the output section are built are outside the

scope of the course, but for those interested a possible realisation using

a “wired-OR” implementation is shown in figure 7.20 in Hill & Peterson.

The Decoder

It is possible to buy ROMs with a large number of inputs, say greater

than 12. This results in a very complex decoder with a large number of

gate inputs. The example given in Hill & Peterson describes a 12-input

ROM, i.e. a device with 4096 addresses. The decoder would then

require 4096 12-input AND gates to implement all the minterms.

A more efficient decoding strategy is to break the input lines into smaller

groups, say three inputs per group, and then further decode the resulting

groups. A detailed account of this is given in Hill & Peterson section 7.3,

particularly figure 7.17.

Uses of ROMs

As ROMs store data in permanently configured hardware, the data is

regarded as being non-volatile. This means that the data remains

uncorrupted when power is lost. Most computers contain boot PROMs,

which contain the initial instructions that a computer must carry out when

switched on. The data contained in such chips corresponds to very

simple instructions that usually allow the computer to load its operating

system from disk or a network.

As a ROM fully decodes a logic function into its minterms, i.e.

implements a truth table by brute force, it is generally considered to be

an overkill to use one to implement a relatively simple logic function.

However, its ability to store a large number of words of data means that

it is an efficient code converter.


Figure 7.18 in Hill & Peterson is an example of the truth table for a BCD

to seven-segment conversion. A seven-segment display is the typical

display seen on digital watches or cookers, which displays decimal

digits by lighting the correct bars of the figure-of-eight pattern shown in the

figure below. A ROM is thus able to take the output from a simple

counter and drive a complex display device directly.

ROM programming example

A 3-bit counter cycles continuously through the numbers 0, 1, 2, 3, 4, 5,

0, 1, 2, …. changing to the next number every 10 seconds. The

numbers 6 and 7 never occur. Design a PROM program which will have

a 3-bit binary number as its inputs and can drive a seven-segment

display. If the number 6 or 7 occurs, only the central segment is to light

to indicate an error.

First, we need to work out which segment needs to be driven for each

input count.


Count | Segments
  0   | a, b, c, d, e, f
  1   | b, c
  2   | a, b, d, e, g
  3   | a, b, c, d, g
  4   | b, c, f, g
  5   | a, c, d, f, g
  6   | g
  7   | g

We need a 3-input, 7-output PROM. Label the inputs as A0, A1, A2

(address) and the outputs as a, b, c, d, e, f, and g (D6 to D0). Assume

that the count is represented in straight binary. This then leads to the

following truth table for the PROM:

A2 A1 A0 | a b c d e f g
 0  0  0 | 1 1 1 1 1 1 0
 0  0  1 | 0 1 1 0 0 0 0
 0  1  0 | 1 1 0 1 1 0 1
 0  1  1 | 1 1 1 1 0 0 1
 1  0  0 | 0 1 1 0 0 1 1
 1  0  1 | 1 0 1 1 0 1 1
 1  1  0 | 0 0 0 0 0 0 1
 1  1  1 | 0 0 0 0 0 0 1
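The PROM program can be checked with a Python sketch of the same table (the a-to-g bit ordering is an illustrative choice):

```python
# PROM contents keyed by count; bit strings are ordered a b c d e f g
PROM = {
    0: "1111110", 1: "0110000", 2: "1101101", 3: "1111001",
    4: "0110011", 5: "1011011", 6: "0000001", 7: "0000001",
}

def lit_segments(count):
    """Return the names of the segments driven for a given input count."""
    return {seg for seg, bit in zip("abcdefg", PROM[count]) if bit == "1"}
```

For a count of 6 or 7 only the central segment g lights, signalling the error case.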

Buses and Tri-state outputs

A major factor in the cost of an integrated circuit is linked to the number

of input and output pins that are required to achieve its functionality.

Often the size of the silicon chip is dictated by the size of the peripheral

frame that encloses the core circuitry. The peripheral frame contains the

bond pads and protection circuitry for these interconnections to the

outside world.


Similarly, in the design of a printed circuit board with many components,

the routing of data and power lines can be an intractable problem. Even

when utilising both sides of the printed circuit board for electrical

connections, it is necessary for integrated circuits to share connections.

These connections are called buses, and typically there are 8, 16, 32 or

64-bit buses for carrying data and address lines.

Bus conflicts will occur if several integrated circuits, connected to the

same bus, try to change the logic values simultaneously. For example,

a ROM chip and an ALU (Arithmetic Logic Unit) chip will probably share

the data bus, but only one of these devices at any time should be

allowed to output data to the bus. Think of what would happen if the

ROM was driving all the data bus bits to a logic 1, whilst the ALU was

trying to put all logic zeros on the bus. The potential difference between

these outputs would cause currents to flow and the actual logic values

taken by the data bus would be undefined.

The simple solution to this problem is to have a third logic state,

implemented in the hardware of the integrated circuit. Devices that use

this approach are described as having Tri-state outputs. The output can

then take one of the following states: logic 0, logic 1 or high impedance.

When the output is in the high-impedance state, the integrated circuit is

effectively isolated from the rest of the circuitry on the printed circuit

board.

An illustration of how this functions is shown in the following figure. The

actual implementation is achieved with MOSFETs if CMOS technology is

being used – see earlier lecture.


A diagrammatic representation of Tri-state logic

The figure above shows a representation of a tri-state logic output stage.

The top switch being closed, with the bottom switch open, corresponds

to a logic 1 at the output. The top switch open, with the bottom switch

closed, corresponds to a logic 0 output. When both top and bottom

switches are open, this corresponds to a floating output – the high-

impedance state in which the gate does not affect the voltage of the

output line.

Chip Select and Output Enable

Most complex logic chips such as ROMs or RAMs (Random Access

Memory) have two control line inputs, one being labelled Chip Select,

the other usually being labelled Output Enable. Chip Select can be

thought of as turning on the input sections of the chip whereas Output

Enable connects the output section to the external pins. Both Chip

Select and Output Enable must be asserted for data to be read out of a

memory chip.


The following figure shows a common circuit symbol for a 256 x 8 bit

ROM.

A 256 by 8 bit ROM

The ROM symbol has small triangles drawn against its outputs – this

indicates that these are tri-state outputs. The small square in the bottom

left hand corner is the symbol for an AND function and indicates that the

inputs at its side must both be true for the whole chip to function and for

the tri-state outputs to exit the high impedance state.

The control input CS1 is active low, which means that it must be a low

voltage to achieve its required function - this is indicated by the small

triangle (sometimes a circle) drawn just above its input line and the bar

above CS1. Such active low signals are often used as chip selects. The

other control input CS2 effectively acts as an output enable. If CS1 has

a low voltage applied to it and CS2 a high voltage applied to it, the ROM

will turn on and drive the 8-bit data bus connected to outputs D0 - D7.


Chip Select is there to switch off the input decoding of the address lines

(which may be changing for other reasons). In CMOS circuitry power is

consumed only when the gates are changing state, so it is wasteful of

power to allow the chip to be continually decoding the address inputs

unnecessarily.

Programmable Logic Array (PLA) [Hill & Peterson section 7.5]

It was stated earlier that a ROM is a highly inefficient implementation of

a logic function. This is because a ROM has to decode all the address

lines fully in order to map all the inputs to the outputs correctly. This can

take several layers of gates, which is an inefficient use of silicon chip

area. It adds to the heat dissipation and introduces unnecessary

propagation delays.

A Programmable Logic Array differs from a ROM in that the input

decoder is uncommitted and may be programmed using a matrix of

fusible links. In order to program a PLA one must define both the

decoder and output stage connections. The electrical connections are

indicated by a set of X’s placed on the input and output wire matrix.

Only where an X occurs should it be assumed that an electrical

connection exists. A simple decoder “program” is illustrated on the next

page.

Note that figure 7.24 in Hill & Peterson confuses the “Inputs” and

“Outputs” labels.

The above is a schematic representation of a programmed PLA. Each

AND gate in the decoder may be driven by up to 6 inputs, formed from

the inputs A, B, C, and their complements. A single line is drawn to

represent each of these 6 inputs. Each OR gate has 4 inputs, each input

corresponding to an AND gate in the decoder. The chip has 3 external

inputs and 3 outputs. The chip thus provides three logic functions f1, f2

and f3, each being a function of the 3 external inputs. The number of AND

gates in the decoder governs the complexity of the logic functions that

can be obtained. If we needed to decode the input functions fully, the

chip would need eight 6-input AND gates and would effectively be

identical to a PROM (giving no saving).

The crosses indicate the connections chosen in this case, each cross

denoting a connection. The logic functions implemented are thus:

f1 = A.B + A.B.C

f2 = A.B.C + B.C

f3 = A.B.C + C

Referring back to the figure, the region to the left of the AND gates is

often referred to as the AND-plane, and that to the right as the OR-

plane. The AND-plane essentially constructs all the necessary minterms

and the OR-plane sums the required minterms to create the function.
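The AND-plane/OR-plane split can be sketched in Python. The product terms and output connections below are hypothetical stand-ins for the crosses in the figure, not the actual programmed pattern:

```python
# AND-plane: each product term is a list of (input_index, complemented?)
# pairs, i.e. which of the 6 lines (A, B, C and their complements) are
# connected to that AND gate.  These terms are invented for illustration.
AND_PLANE = [
    [(0, False), (1, False)],             # term 0: A.B
    [(0, True), (1, False), (2, False)],  # term 1: NOT(A).B.C
    [(2, True)],                          # term 2: NOT(C)
]

# OR-plane: which product terms are summed into each output function.
OR_PLANE = {
    "f1": [0, 1],   # f1 = A.B + NOT(A).B.C
    "f2": [1, 2],   # f2 = NOT(A).B.C + NOT(C)
}

def eval_pla(inputs):
    """Evaluate the programmed PLA for one input vector [A, B, C]."""
    terms = [all(inputs[i] != comp for i, comp in term) for term in AND_PLANE]
    return {name: any(terms[t] for t in ts) for name, ts in OR_PLANE.items()}
```

Changing the function set is just a matter of editing the two data structures, which is the software analogue of blowing a different pattern of fusible links.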

The availability of cheap programmable arrays has given a huge boost to

the production of electronic circuit boards, especially for small to medium

sized companies. Essentially it means that large array circuits carrying a

significant amount of discrete components can be shrunk into one or two

chips, with all the logic functionality programmed into the array. This has

proved to be a very cost-effective route.

An extension to the PLA is the Field Programmable Gate Array (FPGA),

the details of which are beyond the scope of this course. They are

extremely popular as a circuit design approach that offers flexibility with

a high level of sophistication. Normally a designer uses a computer

package (XILINX is a key software supplier) to enter and check an

electronic design, which is then programmed directly into the FPGA

device using programming hardware connected to the computer’s

parallel port. Most modern FPGAs can be reprogrammed.

P2 Digital Logic- notes by L. Tarassenko (Lecturer D.C.O'Brien)

LECTURE F – Memory and Latches

Introduction 86

Memory - The SR Latch 87

Transparency 90

The Clocked-Master-Slave Flip-Flops 91

The D-type latch 93

Lecture F – Memory and Latches

Introduction

[Hill & Peterson Chapter 8]

All the circuits considered so far can be regarded as

combinatorial, which means that the individual logic gates are not

synchronised in any way. This can sometimes lead to glitches in the

final output of the circuit if the paths to the final output have different

numbers of layers of gates, and hence different delays (race hazards).

This can be troublesome when designing large complex circuits that

need to operate quickly.

Control of the circuit timing is introduced through the use of a centralised

clock, acting like a heartbeat for the whole of the circuitry. At first this

might appear to slow everything down, but clock rates in excess of 3

GHz are now realisable, which would have seemed unbelievable only a

few years ago.

The starting point for designing circuitry that can be “clocked” is to

introduce feedback, such that the output of a logic gate is connected to

its inputs – which thus influences the output again. To analyse such a

circuit’s behaviour, the passage of time needs to be taken into

consideration.

The idea of using feedback to ensure that the time evolution of a set of

logic gates follows a desired sequence in response to external stimuli is

fundamental to the design of digital computers. The analysis and design

of such circuits will be the object of the rest of this course on sequential

logic.

Memory - The SR Latch

Consider an OR gate with feedback as shown below:

OR gate with feedback

The equation governing the behaviour of this circuit is

Bn+1 = Bn OR A

where the (n+1)th “cycle” follows the nth “cycle”. The truth table

associated with the circuit is as follows:

A   Bn   Bn+1
0   0    0
0   1    1
1   0    1
1   1    1

If it is assumed that the circuit has been powered up so that the output is

0 and the input is 0, then the output will stay at 0 for all time. If the input

changes from 0 to 1 at any time, then the output will respond by

changing from 0 to 1. However, once the output is 1, then the feedback

ensures that one of the inputs to the OR gate is a 1. Thus the output will

stay at 1. There is thus an order of events that will lead to this circuit

changing its output. The dependency on the order of changes in logic

levels may be shown on a logic-timing diagram, which illustrates the

edge dependencies of this circuit.

Timing diagram showing edge dependency

This circuit has a single binary output and therefore just two output

states, i.e. 0 or 1. It is an example of a very simple latch, a circuit used

to store an event. The output of the latch is said to be either NOT SET

(for logic 0) or SET (for logic 1).

We can see from the truth table that once the input A has changed from

logic 0 to logic 1 the output becomes SET and cannot be NOT SET by

changing A. This is due to the effect of feedback. This sort of circuit is

therefore useful for recording that a single event has occurred; it has

“memory”. However, it would be much more useful if it could be RESET

in some way. This can easily be achieved by introducing a mechanism to

drive the feedback signal to logic 0 (thus removing its influence).

Consider the logical expression for this, assuming that we have added

another input called RESET:

Bn+1 = (Bn AND NOT(RESET)) OR SET

or

Bn+1 = NOT(NOT(Bn) OR RESET) OR SET

where De Morgan's law has been used to convert ANDs into NORs. We

now have a useful latch, shown below, that can be set and reset at will.

Figure of modified latch

This circuit may be further re-drawn to emphasise the nature of the

feedback circuitry. Output B is renamed Q (the standard letter used to

label a state output). The “inverted” B signal is also generated as an

output [NOT(Q)]. This implementation of a latch is known as the SR

latch; it is shown here with NOR gates, but it can just as well be

implemented with NAND gates (although, of course, the latter

implementation will have a different truth table).

The SR Latch

(NOR gates)

The SR latch (shown below for a latch implemented with NOR gates)

has a truth table which is different from what we have seen up to now

with combinatorial circuits. The feedback introduces time ordering, and

so we need to consider the output Q in an ordered fashion. Qn is

considered to be the current value of Q, and Qn+1 represents the next

state. We see that it is possible to SET and RESET the output by

choosing correct values for both S and R. The “X” in the table, for

S = R = 1, indicates a problem with this simple circuit. It is not possible

simultaneously to SET and RESET the output, and if this were tried the

output would be unpredictable. This input combination is therefore not

allowed.

S   R   Qn+1
0   0   Qn
0   1   0
1   0   1
1   1   X

The SR latch is a basic memory element and there is a whole family of

latches based on this simple circuit.
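The truth table can be checked with a small Python model of the cross-coupled NOR pair (illustrative only; the repeated evaluation is a crude stand-in for propagation delay, and the function names are not from the notes):

```python
def nor(a, b):
    """A two-input NOR gate."""
    return not (a or b)

def sr_latch(s, r, q, qbar, settle=4):
    """Evaluate the cross-coupled NOR pair a few times so that the
    feedback settles, then return the new (Q, NOT-Q) pair."""
    for _ in range(settle):
        q = nor(r, qbar)    # Q is the NOR of R and NOT-Q
        qbar = nor(s, q)    # NOT-Q is the NOR of S and Q
    return q, qbar
```

Setting S = 1 drives Q high, S = R = 0 holds the stored value, and R = 1 resets it, matching the rows of the truth table (the S = R = 1 row is the disallowed case and is not exercised here).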

Transparency

As the input values to the latch get modified, the output will change as

quickly as the gates can respond. For such simple circuits this is

typically of the order of a few nanoseconds propagation delay. This

type of response to the input data, being independent of a system clock

for example, is referred to as transparency and therefore this device is

sometimes called a transparent latch.

The Clocked-Master-Slave Flip-Flops

In the Introduction, we mentioned the problems that arise if the

propagation delays in circuits are not accounted for. What is needed is

synchronisation of the circuit(s) into discrete time slots, using a system

clock. The SR latch of the previous section is now used to build clocked

circuitry.

Suppose one wishes to transfer data between two latches in a controlled

manner. As an example, suppose we wish to transfer data into SR latch

number 1 and then into SR latch number 2, as illustrated below.

Two transparent SR latches connected together

The square boxes with S and R inside them are standard symbols for an

SR Transparent Latch, the inverting output being denoted by a small

circle. The input to latch 1 will appear at the output of latch 2 after an

uncertain set of gate delays inside the circuit. What is needed is some

means to control the interaction between the two latches, and make their

responses time ordered. Using a clock signal that repeats in an ordered

fashion it is possible to synchronise the latches. The circuit below, a

Master-Slave SR flip-flop, achieves this objective by using AND gates to

synchronise the changing inputs to the latches.

Master-Slave SR flip-flop

In the above circuit, the Q2 outputs only change when the clock makes a

transition from logic 1 to logic 0. (Of course, the actual change occurs

after the normal propagation delays). This means that a mechanism for

synchronising the circuit with a clocking signal has been achieved.

We can see how this works by considering the operation of each

transparent latch as the clock input takes different values. First let us

assume that the clock is at a logic 1 level. For this case the latch on the

left is exposed to the set and reset inputs (as the AND gates pass along

the digital values). So the first latch responds according to the earlier

truth table.

However, because of the inverter in the clock path, the right latch is

isolated from the changes to Q1 and, as S = R = 0, there is no change in

Q2. Only when the clock goes to logic 0 does the Q data pass along and

Q2 is affected in the normal manner. So the overall output only

responds to the input data once the clock signal has gone low. The first

latch is considered to dictate the operation of the second latch and

hence the term Master-Slave is commonly applied.
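A behavioural sketch of this arrangement in Python (an idealised model, not a gate-level one; the helper names are invented) shows Q2 held while the clock is high and updated only after the clock returns low:

```python
def gated_sr(q, s, r, enable):
    """Transparent SR latch: when enabled, S sets and R resets the
    stored value; when disabled, the stored value is simply held."""
    if enable:
        if s and not r:
            return True
        if r and not s:
            return False
    return q

def master_slave_step(q1, q2, s, r, clk):
    """One evaluation of the master-slave pair for a given clock level."""
    q1 = gated_sr(q1, s, r, enable=clk)             # master follows S, R while clk is high
    q2 = gated_sr(q2, q1, not q1, enable=not clk)   # slave copies the master while clk is low
    return q1, q2
```

Applying S = 1 with the clock high sets the master but leaves the slave untouched; only when the clock falls does Q2 take on the new value.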

As Master-Slave devices are a standard building block, there is a

standard diagrammatic representation as shown below.

Master-Slave SR Flip-Flop

The 1S and 1R denote that the device depends on a control input

number 1. C1 denotes the connection for the control input (clock)

number 1 and the ¬ symbols on the outputs indicate that this is a

Master-Slave device.

The D-type latch

An important type of latch is called the D-type latch, where the “D”

stands for “data”. It is central to most digital designs that rely on

storage of data or states. The D-latch can be constructed using a simple

SR latch as in the following diagram. Here the SR latch is given in its

NAND equivalent form.

The truth table becomes simple, with the Q output taking on the value of

the D input whilst the clock is at logic 1. This is an important feature of a

latch. Whilst the clock is high the output is free to change. When the

clock goes low, the output remains constant and “holds” the last D-value

observable at its input. The truth table and symbol for the D-latch is

shown below and its behaviour is illustrated in the timing diagram.

D   CLK   Qn+1
X   0     Qn
0   1     0
1   1     1
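The transparency is captured by a one-line Python model (illustrative only):

```python
def d_latch(q, d, clk):
    """Transparent D-latch: while the clock is high the output follows
    the D input; while the clock is low the last value is held."""
    return d if clk else q

# trace the latch through a short input sequence
trace, q = [], 0
for d, clk in [(1, 1), (0, 1), (0, 0), (1, 0)]:
    q = d_latch(q, d, clk)
    trace.append(q)
```

In the trace, Q follows D for the first two steps (clock high) and then holds its last value once the clock goes low, even though D changes.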

P2 Digital Logic- notes by L. Tarassenko (Lecturer D.C.O'Brien)

LECTURE G – Clock Mode Circuits

The Clock Pulse 96

The Edge-Triggered D-type Flip-Flop 96

Shift register 99

Asynchronous (ripple) counter 101

D-type flip-flop with reset 101

The Digital State Revisited 103

Storing the State 104

State Transition Tables 105

The Clock Pulse

Circuits using just combinational logic and transparent latches are called

fundamental mode circuits. The uncertainty in timing delays throughout

various parts of such circuits can be a serious problem. Instead,

whenever possible, a preferred option is to have the elements of the

circuit responding to the edge of a system clock. In this lecture, we

focus on edge-triggered components which are referred to as either

pulse mode or clock mode circuits. Let us start by examining a clock

pulse and defining various attributes.

The clock pulse has two transitions, from logic 0 to 1 and from 1 to 0.

These are called the positive and negative edges of the pulse

respectively (or sometimes the leading and trailing edges). A positive

edge-triggered flip-flop responds to its current data inputs exactly at the

0 to 1 transition of the clock pulse, modifying its output a short

propagation delay afterwards. Similarly a negative edge-triggered flip-

flop responds to its input data values on the 1 to 0 transition, producing a

modified output shortly afterwards.

The Edge-Triggered D-type Flip-Flop

Data storage in pulse mode circuits is almost always achieved using a

D-type flip-flop, for storing one bit of data. The circuit symbol for the

positive edge-triggered variety is shown below. Like the D-latch, it has a

single “data” input and a “Q” output, but often it is shown with a NOT(Q)

output as well. The only control signal it receives is the clock input.

The operation of the edge-sensitive D-type flip-flop is best illustrated with

its truth table. Remember that Qn+1 refers to the next changed state of

the Q output.

D   C   Qn+1
0   0   Qn
1   0   Qn
0   1   Qn
1   1   Qn
X   ↑   Dn

Notice that Qn+1 remains the same as Qn in all cases except when the

clock pulse takes a positive transition (indicated by the up arrow). For

this case the value of D is copied to Qn+1 after the

inevitable propagation delay. The value of D copied to Qn+1 is the value

exactly at the transition edge. Shortly after this transition the value of D

can change but it will not alter the Q output. In this way the value of D

has been stored until the next clock pulse comes along.
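The edge-triggered behaviour can be modelled in Python by comparing the clock's previous and current levels (a behavioural sketch, not a gate-level model):

```python
def d_flip_flop(q, d, clk_prev, clk_now):
    """Positive edge-triggered D-type: Q takes the value of D only on
    a 0 -> 1 clock transition; at all other times it is held."""
    if not clk_prev and clk_now:
        return d
    return q
```

A steady clock level, whether 0 or 1, leaves Q unchanged regardless of D; only the rising edge captures the input.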

The effects of the clock edge transition can be further illustrated using a

timing diagram as shown below. Note that the behaviour is different

from the D-latch.

A D-type flip-flop circuit, based on the NAND implementation of the SR

latch together with the master-slave arrangement, is shown here:

There is occasionally some confusion (certainly on the web) in the

definitions of a latch and a flip-flop. A latch is defined by the fact that the

output changes when an input changes (of course, after some delay)

whereas in a flip-flop, the output only changes when the clock changes.

SR flip-flops can be made either as level-triggered or edge-triggered

devices. D-type flip-flops can only ever be edge-triggered.

Shift register

[Hill & Peterson figure 8.10]

A shift register is a good illustration of how D-type flip-flops work in clock

mode. In this example the flip-flops are connected in series and the

serial input data is passed from one flip-flop to the other down the chain.

The clock mode assumption is very important here. On every negative

clock edge, data present on “Serial in” is clocked into flip-flop number 1;

at the same time, the content of flip-flop 1 is clocked into flip-flop 2. The

data present on the input of each flip-flop is thus clocked at the same

time and the “old” data shifts from left to right by one place at every

negative clock edge.

A 3-Stage Shift Register with D-type Flip-flops

The data moves in discrete time steps, and the behaviour of the circuit is

therefore very different from that of the ripple adder circuit discussed

earlier that had an unpredictable propagation delay before the outputs

became valid. In this case the data will always be valid after a period

equal to the propagation delay of one flip-flop, regardless of how many

are cascaded together.
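One clock edge of the shift register can be modelled in Python as a simultaneous capture by every stage (a sketch; list index 0 is taken here to be flip-flop 1):

```python
def shift_step(stages, serial_in):
    """One clock edge of an n-stage shift register: every flip-flop
    captures the value on its input at the same instant, so the data
    moves one place along the chain."""
    return [serial_in] + stages[:-1]

# clock a short serial bit stream into a 3-stage register
stages, history = [0, 0, 0], []
for bit in [1, 0, 1, 1]:
    stages = shift_step(stages, bit)
    history.append(stages)
```

Because the whole new list is built from the old one, every stage sees the “old” data of its neighbour, just as the clock-mode assumption requires.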

The timing diagram for the 3-stage shift register is shown below. Note

how the outputs change (after the propagation delay) on the negative

edge of the clock signal.

Asynchronous (ripple) counter

D-type flip-flops can also be used to implement a ripple counter, which is

an asynchronous circuit. The 4-bit ripple counter, shown below, consists

of 4 D-type flip-flops, with each flip-flop holding the value of one of the

binary bits.

Input pulses are counted, the bit sequence b3b2b1b0 running from

0000 up to 1111 (15). The effect of the first flip-flop ripples along the

cascade, and so the count does not settle until each flip-flop has been

affected in turn. The propagation delay of each flip-flop therefore

determines the overall counter speed. This is one of the reasons why

synchronous designs are better as counter speed for those designs is

usually limited by the propagation delay of just one flip-flop.

A 4-bit ripple counter implemented with D-type flip-flops
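The rippling action can be sketched in Python. The model below assumes each flip-flop toggles when the previous stage's output falls from 1 to 0, which is one common wiring for an up-counter (the function is illustrative, not the notes' circuit):

```python
def ripple_count(num_pulses, bits=4):
    """Count input pulses on a chain of toggle-connected D-types:
    stage 0 toggles on every input pulse, and each later stage
    toggles only when the previous stage's output falls from 1 to 0."""
    q = [0] * bits          # q[0] is the least significant bit b0
    for _ in range(num_pulses):
        stage = 0
        while stage < bits:
            q[stage] ^= 1
            if q[stage] == 1:
                break       # output rose: no falling edge to pass on
            stage += 1      # output fell: the toggle ripples onwards
    return q
```

The inner loop makes the sequential nature of the ripple explicit: a pulse can only reach stage k after it has toggled stages 0 to k-1 in turn.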

D-type flip-flop with reset

Clearly the counter needs to be reset to zero in some cases, and

therefore it is necessary to have access to D-type flip-flops with a reset

or clear line. Such flip-flops are useful in asynchronous counters that

have a truncated count sequence, e.g. a modulo-6 counter. The symbol

below is for a flip-flop with an active low reset line.

D   C   Reset   Qn+1
X   0   1       Qn
X   1   1       Qn
0   ↑   1       0
1   ↑   1       1
X   X   0       0

The Digital State Revisited

The drawback with the asynchronous counter circuit is the use of a

feedback path that causes an asynchronous interaction with the discrete

states of the flip-flops. In the rest of this lecture course, we will

concentrate on clock-mode designs in which the state is defined fully by

the current outputs of the flip-flops within the circuit and there is no

feedback to the transparent inputs.

From a specific state the circuit moves to the next desired stable state at

the clock transition. At the next clock pulse the circuit again moves from

its current state to the next designed state, and so on. Some

combinational logic may be required to achieve this by decoding the

current state and any inputs to the circuit, but essentially the progression

through the states can only happen synchronised to the clock signal.

The combinational logic elements, as well as the system clock and the

flip-flops which store the current state, are arranged in the configuration

shown on the next page. The current state of the system is defined by

the outputs of the storage elements. Paradoxically, the next state is

defined by the inputs to the storage elements, but this next state will not

be entered until the controlling clock pulse occurs. The outputs are

defined as a logical function of the current inputs and the current state.

Storing the State

The data storage elements can be implemented most simply by using an

array of D-type flip-flops, all driven by the system clock. Each flip-flop

corresponds to a single bit of storage. On each clock edge, the values

provided by the combinational logic are recorded.

As an example, the following circuit shows 4 D-type flip-flops that can

represent 16 unique states (i.e. 24). The data is clocked through on the

positive edge of the clock and inputs to the flip-flops define the next state

of the machine.

State Transition Tables

The necessary tools for designing the combinational logic now need to

be developed. One way of describing how a circuit functions is to draw a

table similar to a Karnaugh Map. The rows correspond to the current

state and the columns to the values of the inputs. The entries in the

table correspond to the next state.

The table shown on the next page is an example of a transition table.

Note that the entries for this table have no particular meaning and were

just chosen for illustration.

        x1x2
y1y2    00    01    11    10
00      00    00    10    01
01      01    00    11    01
11      01    10    11    11
10      00    10    10    11

Y1Y2

The leftmost column is labelled y1y2 and corresponds to all possible

states for this system. We have four states defined by two state

variables y1y2, giving rise to four rows. There are two inputs, x1 and x2,

that give rise to the four columns. Sometimes the internal variables y1y2

are called the secondary variables, whereas the inputs x1x2 are called

the primary variables. The whole table is labelled Y1Y2, which means

that the entries of the table correspond to the next state (the current

state being denoted by the lower case y’s and the next state by the

upper case Y’s).

Consider the first row of the transition table. If the system is in state 00,

then it remains in this state after the next clock edge if the input is 00 or

01. If the input is 11 or 10 there will be a state transition at the clock

edge. Changing the input values does not change the state

immediately, thus a change in input just corresponds to moving to

another column in the same row. A row change only occurs at a clock

edge, the next row being defined by the entry in the current table

position.
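The transition table maps naturally onto a nested Python dictionary, stepped once per clock edge (a sketch using the illustrative entries from the table):

```python
# key: current state y1y2; inner key: input x1x2; value: next state Y1Y2
NEXT_STATE = {
    "00": {"00": "00", "01": "00", "11": "10", "10": "01"},
    "01": {"00": "01", "01": "00", "11": "11", "10": "01"},
    "11": {"00": "01", "01": "10", "11": "11", "10": "11"},
    "10": {"00": "00", "01": "10", "11": "10", "10": "11"},
}

def clock_edge(state, inputs):
    """Return the next state entered at the clock edge, given the
    current state and the input values at that instant."""
    return NEXT_STATE[state][inputs]
```

Changing the inputs between clock edges only changes which inner-dictionary entry would be read; the state itself changes only when clock_edge is called, mirroring the row/column behaviour described above.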

P2 Digital Logic- notes by L. Tarassenko (Lecturer D.C.O'Brien)

LECTURE H – Design of Sequential Logic

Synchronous Counters 108

Generating the Transition Table 108

Using Registers to Store the State - ROM Based Counters 109

Sequencers – number of states required 111

Synchronous D-type flip-flop designs 115

The Steering Table (or Transition List) 116

Conclusion 118

Lecture H – Design of Sequential Logic

Synchronous Counters

[Hill & Peterson, Sections 8.5 to 8.7 inclusive]

With synchronous counters, all state transitions occur on a clock edge.

Sophisticated designs can be obtained using a Programmable Read

Only Memory (PROM) instead of discrete gates in a combinational logic

circuit, but both approaches will be explored in this lecture.

Generating the Transition Table

Consider firstly a simple counter example, say a modulo-6 or divide-by-6

counter (i.e. one having a count sequence 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5,

0, 1,…) For such a counter, there are 6 states, which requires 3 bits.

The transition table below shows the current state in the left column

taking the lower case labels y2y1y0, and the entries referring to the next

state occurring on one of the clock edges, taking the upper case labels

Y2Y1Y0.

Current State   Next State
y2y1y0          Y2Y1Y0
000             001
001             010
010             011
011             100
100             101
101             000

Using Registers to Store the State - ROM Based Counters

The diagram of a state machine from the previous lecture is reproduced

below, but one modification will now be introduced. Instead of using

combinational logic, we will use a ROM to decode the next state.

Simple synchronous counters have no inputs, and the outputs are the

states stored in the D-type flip-flops. The above diagram can therefore

be modified, showing a circuit having no additional external inputs and a

PROM replacing the combinational logic. The resulting circuit for the

divide-by-6 counter is shown on the next page.

In the ROM-based design the next state is the data output from the

ROM in response to the current state, stored by the flip-flops. The

current state can also be considered as the address input to the ROM.

At every clock edge, the data output from the ROM is transferred to the

D-type flip-flops. The clock-mode restriction ensures that the only data

clocked into the flip-flops is that being generated by the ROM in

response to the current state. It is the fact that the D-type flip-flops are

edge-triggered that ensures that the above circuit works.

Data stored in the PROM

The PROM has a three-bit address (A2A1A0) and provides a three-bit

output (D2D1D0). In this simple example the contents of the PROM,

pointed to by the current address, must simply contain the next state.

The necessary data content of the PROM is given in the following table.

A2A1A0   D2D1D0
000      001
001      010
010      011
011      100
100      101
101      000
110      XXX
111      XXX

Note that there are two “don’t care” lines in the truth table as the circuit

never reaches these states (remember that for a ROM all states have to

be decoded, unlike a PLA design approach).

What should happen if the machine is powered up into one of these two

unimplemented states? A good circuit designer would consider this

possibility and probably decide to put the data values 000 into the PROM

locations 110 and 111. In this way if any one of these unimplemented

states is entered by mistake then the machine is steered to the initial

state and counting would proceed as normal.
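The ROM-based counter is easy to simulate in Python (a behavioural sketch, with the two unused locations programmed to 000 as suggested above):

```python
# ROM contents for the divide-by-6 counter: address = current state,
# data = next state, with the unused addresses steered back to 000.
ROM = {
    0b000: 0b001, 0b001: 0b010, 0b010: 0b011,
    0b011: 0b100, 0b100: 0b101, 0b101: 0b000,
    0b110: 0b000, 0b111: 0b000,
}

def run_counter(start, num_edges):
    """Clock the counter num_edges times; at each edge the flip-flops
    capture the ROM output addressed by the current state."""
    state, sequence = start, []
    for _ in range(num_edges):
        state = ROM[state]
        sequence.append(state)
    return sequence
```

Re-programming the dictionary entries is all that is needed to obtain a different count sequence, which is the point made about PROM-based designs.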

Sequencers – number of states required

The sequence for the modulo-6 or divide-by-6 counter is just the count

from 0 to 5 repeated over and over. These 6 states could also refer to a different

count sequence, for example, the repeated sequence 0, 1, 3, 5, 2, 4, 0,

1, 3, 5, 2…. This is easy to achieve, as all that is required is for the

PROM to be re-programmed as shown on the next page.

A2A1A0   D2D1D0
000      001
001      011
010      100
011      101
100      000
101      010
110      XXX
111      XXX

This illustrates the advantages of the PROM approach, compared with

decoding using combinational logic (see later). With the latter, a simple

change in the state sequence would require a complete re-design of the

combinational logic circuitry.

It is not always possible to design a circuit in which the states are used

as the outputs directly. This problem arises when there is not a unique

“next state” corresponding to each output. As an example consider the

following problem:

Example: A Two-Phase Clock Generator

In CMOS logic design, it is often necessary to generate a clocking signal

consisting of two synchronised, non-overlapping clocks. The two-phase

clock generator circuit needs to be driven by a single clock input and

produce two non-overlapping clock outputs Φ1 and Φ2 as shown in the

next figure. The preferred design approach is to use a ROM-based

sequencer using D-type flip-flops as the storage elements, with the

outputs being the states of the flip-flops.


The two non-overlapping clocks shown above change state on the

positive, or leading, edge of the clock, and so positive edge-triggered

devices are needed. However, since the outputs Φ2Φ1 follow the

sequence 00, 01, 00, 10, 00, 01,….., there is no unique state that follows

the state 00 (it can be 01 or 10). This may be overcome by defining a

third variable aux, which is a logical 1 for one of the 00 states and a

logical 0 for the other, as shown in the revised timing diagram below:

The sequence is now 000, 101, 100, 010, 000, 101,…. and there are two

unique states associated with the non-overlapping clock outputs 00.


The transition table is:

Current State    Next State
aux Φ2Φ1         aux Φ2Φ1
 0  0 0           1  0 1
 1  0 1           1  0 0
 1  0 0           0  1 0
 0  1 0           0  0 0

The transition table for the ROM is:

A2A1A0    D2D1D0
 000       101
 001       XXX
 010       000
 011       XXX
 100       010
 101       100
 110       XXX
 111       XXX

Finally, the corresponding circuit implementation for the non-overlapping

clock generator is given on the next page, where the PROM contains the

data presented in the above table.
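The sequencer's behaviour can be verified with a short simulation (an illustrative Python sketch; the PROM is modelled as a dictionary over the four implemented states, with the state bits ordered as aux, Φ2, Φ1):

```python
# ROM for the non-overlapping clock generator.  Address and data
# are both (aux, phi2, phi1); only the four reachable states are
# implemented, matching the transition table above.
PROM = {0b000: 0b101, 0b101: 0b100, 0b100: 0b010, 0b010: 0b000}

state = 0b000
outputs = []                      # (phi2, phi1) on each clock cycle
for _ in range(8):
    phi2 = (state >> 1) & 1      # middle bit of the state
    phi1 = state & 1             # least significant bit
    outputs.append((phi2, phi1))
    state = PROM[state]
print(outputs)
```

The printed output cycles through (0,0), (0,1), (0,0), (1,0), ...: the two clocks are never high together, as required.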


The above example demonstrates that the number of states required to

implement a sequence depends on the length of the sequence, and not

just the number of independent combinations of outputs required in the

sequence.

Synchronous D-type flip-flop designs

The finite-state machines introduced above have used a combination of

both D-type flip-flops and a PROM. Although this is a versatile and

powerful design approach, it is not the only option, and the need to

encode all the memory locations of the PROM fully can be a disadvantage

for simple designs. These can instead use combinational logic

to drive the D-type flip-flops into the next state.


The Steering Table (or Transition List)

[Hill & Peterson Section 8.6]

When designing sequential circuits, it is useful to determine what value

of D should be chosen to achieve the desired state transition at the Q

output, assuming the current state is known. The steering table

identifies what value is needed on the D input in order to obtain the

desired transition on the clock edge. For the D-type flip-flop, it is easy to

derive the steering table (shown on the right below) from the truth table:

Truth table                       Steering table

D   Qn  Qn+1  Transition          Desired Transition    D
0   0   0     0 → 0 = 0           0 → 0 = 0             0
1   0   1     0 → 1 = H           0 → 1 = H             1
1   1   1     1 → 1 = 1           1 → 1 = 1             1
0   1   0     1 → 0 = L           1 → 0 = L             0
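The D-type steering table can be generated mechanically, which is a useful sanity check before tackling larger designs (an illustrative Python sketch; the transition labels 0, 1, H and L follow the convention used in these notes):

```python
def d_input(qn, qn_next):
    """D needed to take Q from qn to qn_next on the clock edge.
    For a D-type flip-flop this is simply the desired next state."""
    return qn_next

def transition(qn, qn_next):
    """Label the transition: 0 (stay low), 1 (stay high), H or L."""
    if qn == qn_next:
        return str(qn)                   # 0 -> 0 = 0,  1 -> 1 = 1
    return "H" if qn_next == 1 else "L"  # rising or falling edge

# Enumerate all four possible transitions.
table = [(qn, qn1, transition(qn, qn1), d_input(qn, qn1))
         for qn in (0, 1) for qn1 in (0, 1)]
for row in table:
    print(row)
```

The printed rows reproduce the steering table above: D always equals the desired next state, which is what makes the D-type so convenient for synchronous design.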

As an exercise in the design of a sequential logic circuit, consider a

simple synchronous counter that counts down from 111 to 000 and back

to 111, in a continuous sequence. The state transition table for a D-type

implementation is given below, with the desired transitions on the right:

Current State        Next State             Desired
Q2n  Q1n  Q0n        Q2n+1  Q1n+1  Q0n+1    Transitions

1 1 1  (7)           1 1 0                  1  1  L
1 1 0  (6)           1 0 1                  1  L  H
1 0 1  (5)           1 0 0                  1  0  L
1 0 0  (4)           0 1 1                  L  H  H
0 1 1  (3)           0 1 0                  0  1  L
0 1 0  (2)           0 0 1                  0  L  H
0 0 1  (1)           0 0 0                  0  0  L
0 0 0  (0)           1 1 1                  H  H  H


Using K-maps, it is now possible to determine the logic gates to

implement the combinational logic to drive the 3 D-type flip-flops. The K-

maps, logic functions and corresponding circuit are given below.

A D-type implementation of a “down” synchronous counter
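Candidate logic expressions for the D inputs can be checked exhaustively against the transition table. The sketch below (illustrative Python) tests one possible set of minimised expressions, D2 = Q2 XOR (Q1 NOR Q0), D1 = Q1 XOR NOT Q0, D0 = NOT Q0; the groupings obtained from the K-maps in the lecture may differ in form but must agree on all eight states:

```python
def next_state(q2, q1, q0):
    # Desired behaviour: count down from 111 to 000 and wrap back
    # to 111, i.e. subtract 1 modulo 8.
    n = (4 * q2 + 2 * q1 + q0 - 1) % 8
    return (n >> 2) & 1, (n >> 1) & 1, n & 1

def d_inputs(q2, q1, q0):
    # One possible K-map minimisation (an assumption to be verified,
    # not necessarily the form given in the notes).
    d2 = q2 ^ (1 - (q1 | q0))    # Q2 XOR (Q1 NOR Q0)
    d1 = q1 ^ (1 - q0)           # Q1 XOR NOT Q0
    d0 = 1 - q0                  # NOT Q0: the LSB toggles every clock
    return d2, d1, d0

# Exhaustive check over all 8 states of the counter.
ok = all(d_inputs(q2, q1, q0) == next_state(q2, q1, q0)
         for q2 in (0, 1) for q1 in (0, 1) for q0 in (0, 1))
print(ok)
```

An exhaustive check like this is cheap for small state machines and catches grouping mistakes in a K-map immediately.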


Conclusion

This lecture has shown how the above architecture can be used to

implement synchronous counters. These are simple examples of finite-

state machines, which are defined by their states (of which there is a

finite number), transitions between these states and actions or outputs

associated with these states. The concept of a finite-state machine is at

the heart of the theory of computing, as it allows for a description of how

bits of information can be handled by a sequential machine. If we

introduce a branching capability which allows the sequence of states to

change according to an input condition, we have most of the elements

required for understanding how a computer works…