Sunday, October 18, 2020

DE unit 1

UNIT – 1: DESIGN CONCEPTS

1.1. DIGITAL HARDWARE:

1. Logic circuits are used to build computer hardware, as well as many other types of products. All such products are broadly classified as digital hardware. 

2. The name digital derives from the way in which information is represented in computers, as electronic signals that correspond to digits of information.

3. The technology used to build digital hardware has evolved dramatically over the past few decades. 

4. Until the 1960s logic circuits were constructed with bulky components, such as transistors and resistors that came as individual parts. 

5. The advent of integrated circuits made it possible to place a number of transistors, and thus an entire circuit, on a single chip.

6. In the beginning these circuits had only a few transistors, but as the technology improved they became more complex. 

7. Integrated circuit chips are manufactured on a silicon wafer, such as the one shown in below figure. 

8. The wafer is cut to produce the individual chips, which are then placed inside a special type of chip package. 

9. By 1970 it was possible to implement all circuitry needed to realize a microprocessor on a single chip. 

10. Although early microprocessors had modest computing capability by today’s standards, they opened the door for the information processing revolution by providing the means for implementation of affordable personal computers.

 

Fig. A silicon wafer

11. About 30 years ago Gordon Moore, chairman of Intel Corporation, observed that integrated circuit technology was progressing at an astounding rate, approximately doubling the number of transistors that could be placed on a chip every two years. This phenomenon, informally known as Moore’s law, continues to the present day.

12. Presently, chips can be manufactured containing billions of transistors.

13. The International Technology Roadmap for Semiconductors (ITRS) produces a forecast of how the technology is expected to evolve. This forecast discusses many aspects of technology, including the maximum number of transistors that can be manufactured on a single chip. A sample of data from the ITRS is given in below figure.


 

Fig. An estimate of the maximum number of transistors per chip over time.

14. The roadmap predicts that chips with as many as 100 billion transistors will be possible by the year 2022.

15. The designer of digital hardware may be faced with designing logic circuits that can be implemented on a single chip or designing circuits that involve a number of chips placed on a printed circuit board (PCB).

16. There exists a large variety of chips that implement various functions that are useful in the design of digital hardware. The chips range from simple ones with low functionality to extremely complex chips.

17. For example, a digital hardware product may require a microprocessor to perform some arithmetic operations, memory chips to provide storage capability, and interface chips that allow easy connection to input and output devices. Such chips are available from various vendors.

18. For many digital hardware products, it is also necessary to design and build some logic circuits from scratch. 


For implementing these circuits, three main types of chips may be used: 

1. Standard chips

2. Programmable logic devices 

3. Custom chips

1.1.1 Standard chips:

Each standard chip contains a small amount of circuitry (usually involving fewer than 100 transistors) and performs a simple function.

To build a logic circuit, the designer chooses the chips that perform whatever functions are needed and then defines how these chips should be interconnected to realize a larger logic circuit.

Standard chips were popular for building logic circuits until the early 1980s.

As integrated circuit technology improved, it became inefficient to use valuable space on PCBs for chips with low functionality. 

Another drawback of standard chips is that the functionality of each chip is fixed and cannot be changed.

1.1.2. Programmable logic devices:

In contrast to standard chips that have fixed functionality, it is possible to construct chips that contain circuitry which can be configured by the user to implement a wide range of different logic circuits.

These chips have a very general structure and include a collection of programmable switches that allow the internal circuitry in the chip to be configured in many different ways.

The designer can implement whatever functions are required for a particular application by setting the programmable switches as needed.

The switches are programmed by the end user, rather than when the chip is manufactured. Such chips are known as programmable logic devices (PLDs).

PLDs are available in a wide range of sizes, and can be used to implement very large logic circuits. The most commonly-used type of PLD is known as a field-programmable gate array (FPGA).

The largest FPGAs contain billions of transistors and support the implementation of complex digital systems, which is why FPGAs are widely used today.

A drawback is that the programmable switches consume valuable chip area and limit the speed of operation of the implemented circuits.

Thus in some cases FPGAs may not meet the desired performance or cost objectives.





 

Fig. An FPGA board


1.1.3. Custom chips:


It is possible to design a chip from scratch; namely, the logic circuitry that must be included on the chip is designed first and then the chip is manufactured by a company that has the fabrication facilities. This approach is known as custom or semi-custom design, and such chips are often called application-specific integrated circuits (ASICs).

The main advantage of a custom chip is that its design can be optimized for a specific task; hence it usually leads to better performance.

It is possible to include a larger amount of logic circuitry in a custom chip than in other types of chips.

The cost of producing such chips is high, but if they are produced in large quantities, the cost per chip becomes low.

A disadvantage of the custom-design approach is that manufacturing a custom chip often takes a considerable amount of time, on the order of months.

1.2. DESIGN PROCESS:

Introduction:

1. The availability of computer-based tools has greatly influenced the design process in a wide variety of environments.

2. Certain steps in the development cycle must be performed if the final product is to meet the specified objectives.

3. The flowchart in Figure depicts a typical development process. We assume that the process is to develop a product that meets certain expectations. 

4. The most obvious requirements are that the product must function properly, that it must meet an expected level of performance, and that its cost should not exceed a given target.

Development Process:

1. The process begins with the definition of product specifications. The essential features of the product are identified, and an acceptable method of evaluating the implemented features in the final product is established. 

2. The specifications must be tight enough to ensure that the developed product will meet the general expectations, but should not be unnecessarily constraining.

3. From a complete set of specifications, it is necessary to define the general structure of an initial design of the product. This step is difficult to automate. It is usually performed by a human designer because there is no clear-cut strategy for developing a product’s overall structure—it requires considerable design experience and intuition.

4. After the general structure is established, CAD tools are used to work out the details. Many types of CAD tools are available, ranging from those that help with the design of individual parts of the system to those that allow the entire system’s structure to be represented in a computer.

5. When the initial design is finished, the results must be verified against the original specifications.


6. CAD tools enable designers to simulate the behavior of incredibly complex products, and such simulations are used to determine whether the obtained design meets the required specifications. If errors are found, then appropriate changes are made and the verification of the new design is repeated through simulation.


7. Although some design flaws may escape detection via simulation, most problems are discovered in this way.


8. When the simulation indicates that the design is correct, a complete physical prototype of the product is constructed.


9. The prototype is thoroughly tested for conformance with the specifications. Any errors revealed in the testing must be fixed. The errors may be minor, and often they can be eliminated by making small corrections directly on the prototype of the product. In case of large errors, it is necessary to redesign the product and repeat the steps explained above.

10. When the prototype passes all the tests, then the product is deemed to be successfully designed and it can go into production.


 

Fig. The development process.



1.3. DESIGN OF DIGITAL HARDWARE:

1. To understand the role that logic circuits play in digital systems, consider the structure of a typical computer, as illustrated in Figure.

2. The computer case houses a number of printed circuit boards (PCBs), a power supply, and (not shown in the figure) storage units, like a hard disk and DVD or CD-ROM drives. Each unit is plugged into a main PCB, called the motherboard.

3. As indicated in the figure, the motherboard holds several integrated circuit chips, and it provides slots for connecting other PCBs, such as audio, video, and network boards.


 

Fig. A digital hardware system (part a)


4. Below figure illustrates the structure of an integrated circuit chip. The chip comprises a number of subcircuits, which are interconnected to build the complete circuit. 


5. Examples of subcircuits are those that perform arithmetic operations, store data, or control the flow of data. Each of these subcircuits is a logic circuit. 


6. As shown in the middle of the figure, a logic circuit comprises a network of connected logic gates. Each logic gate performs a very simple function, and more complex operations are realized by connecting gates together.


7. Logic gates are built with transistors, which in turn are implemented by fabricating various layers of material on a silicon chip.


 

Fig. A digital hardware system (part b)


8. Logic circuits can be designed with the help of CAD tools from companies such as Altera, Cadence, Mentor Graphics, Synopsys, and Xilinx.


1.4. DIGITAL REPRESENTATION OF INFORMATION:

1. Information is represented in logic circuits as electronic signals. Each of these signals can be thought of as representing one digit of information. 

2. To make the design of logic circuits easier, each digit is allowed to take on only two possible values, usually denoted as 0 and 1. 

3. These logic values are implemented as voltage levels in a circuit; the value 0 is usually represented as 0 V (ground), and the value 1 is the voltage level of the circuit’s power supply. Typical power-supply voltages in logic circuits range from 1 V DC to 5 V DC.

4. In general, all information in logic circuits is represented as combinations of 0 and 1     digits.

5. Before going on to logic circuits, it will be helpful to examine how numbers, alphanumeric data (text), and other information can be represented using the digits 0 and 1.

1.4.1. Binary Numbers:

1. In the familiar decimal system, a number consists of digits that have 10 possible values, from 0 to 9, and each digit represents a multiple of a power of 10. 

2. The number 8547 represents 8×10^3 + 5×10^2 + 4×10^1 + 7×10^0.

3. We do not normally write the powers of 10 with the number, because they are implied by the positions of the digits. 

4. In general, a decimal integer is expressed by an n-tuple comprising n decimal digits

D = d_(n-1) d_(n-2) … d_1 d_0

which represents the value

V(D) = d_(n-1)×10^(n-1) + d_(n-2)×10^(n-2) + … + d_1×10^1 + d_0×10^0

This is referred to as the positional number representation.

5. Because the digits have 10 possible values and each digit is weighted as a power of 10, we say that decimal numbers are base-10 numbers. Decimal numbers are familiar, convenient, and easy to understand.

6. Because digital circuits represent information using only the values 0 and 1, it is not practical to have digits that can assume ten values. In these circuits it is more appropriate to use the binary, or base-2, system, which has only the digits 0 and 1. Each binary digit is called a bit.

7.  In the binary number system, the same positional number representation is used so that

B = b_(n-1) b_(n-2) … b_1 b_0

represents an integer that has the value

V(B) = b_(n-1)×2^(n-1) + b_(n-2)×2^(n-2) + … + b_1×2^1 + b_0×2^0

     = Σ_(i=0 to n-1) b_i × 2^i

For example, the binary number 1101 represents the value

V = 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0

8. Thus to specify that 1101 is a base-2 number, we will write (1101)_2. Evaluating the preceding expression for V gives V = 8 + 4 + 1 = 13. Hence

(1101)_2 = (13)_10


Table. Numbers in decimal and binary 

 


9. The range of integers that can be represented by a binary number depends on the number of bits used.

10. Table lists the first 15 positive integers and shows their binary representations using four bits. An example of a larger number is (10110111)_2 = (183)_10. In general, using n bits allows representation of positive integers in the range 0 to 2^n − 1.

11. In a binary number the right-most bit is usually referred to as the least-significant bit   (LSB). The left-most bit, which has the highest power of 2 associated with it, is called the most-significant bit (MSB). In digital systems it is often convenient to consider several bits together as a group. A group of four bits is called a nibble, and a group of eight bits is called a byte.
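The positional evaluation V(B) described above can be sketched in a few lines of Python; the function name and the bit-string argument are illustrative choices, not part of the text:

```python
def binary_value(bits):
    """Evaluate V(B) = sum of b_i * 2^i for a bit string given MSB first."""
    return sum(int(b) * 2 ** (len(bits) - 1 - i) for i, b in enumerate(bits))

print(binary_value("1101"))      # → 13, the (1101)_2 example from the text
print(binary_value("10110111"))  # → 183, the (10110111)_2 example
```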

1.4.2. Conversion between Decimal and Binary Systems:

1. A binary number is converted into a decimal number simply by applying equation and evaluating it using decimal arithmetic.

2. Converting a decimal number into a binary number is not quite as straightforward, because we need to construct the number by using powers of 2. 

3. For example, the number (17)_10 is 2^4 + 2^0 = (10001)_2, and the number (50)_10 is 2^5 + 2^4 + 2^1 = (110010)_2.

4. In general, the conversion can be performed by successively dividing the decimal number by 2 as follows. Suppose that a decimal number D = d_(n-1) d_(n-2) … d_1 d_0 with a value V is to be converted into a binary number B = b_(n-1) b_(n-2) … b_1 b_0. Then we can write V in the form V = b_(n-1)×2^(n-1) + b_(n-2)×2^(n-2) + … + b_1×2^1 + b_0×2^0

5. If we now divide V by 2, the result is V/2 = b_(n-1)×2^(n-2) + … + b_2×2^1 + b_1 + b_0/2

6. The quotient of this integer division is b_(n-1)×2^(n-2) + … + b_2×2^1 + b_1, and the remainder is b_0. If the remainder is 0, then b_0 = 0; if it is 1, then b_0 = 1.

7. Observe that the quotient is just another binary number, which comprises n − 1 bits, rather than n bits. Dividing this number by 2 yields the remainder b_1. The new quotient is b_(n-1)×2^(n-3) + … + b_2.

8. Continuing the process of dividing the new quotient by 2, and determining one bit in each step, will produce all bits of the binary number. The process continues until the quotient becomes 0.

9. Below figure illustrates the conversion process, using the example (857)_10 = (1101011001)_2. Note that the least-significant bit (LSB) is generated first and the most-significant bit (MSB) is generated last.


 

Fig. Conversion from decimal to binary.
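The successive-division procedure above can be sketched in Python; `to_binary` is an illustrative name for this helper:

```python
def to_binary(v):
    """Convert a non-negative decimal integer to a binary string by
    repeatedly dividing by 2; each remainder is one bit, LSB first."""
    if v == 0:
        return "0"
    bits = []
    while v > 0:
        v, r = divmod(v, 2)   # quotient carries on; remainder is the next bit
        bits.append(str(r))   # the LSB is generated first
    return "".join(reversed(bits))  # the MSB ends up on the left

print(to_binary(857))  # → 1101011001, matching the worked example
```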


1.4.3. ASCII Character Code:

1. Alphanumeric information, such as letters and numbers typed on a computer keyboard, is represented as codes consisting of 0 and 1 digits. The most common code used for this type of information is known as the ASCII code, which stands for the American Standard Code for Information Interchange.

2. The code specified by this standard is presented in the following table


Table. The seven-bit ASCII code

 


3. The ASCII code uses seven-bit patterns to denote 128 different characters. Ten of the characters are the decimal digits 0 to 9. As the table shows, the high-order bits have the same pattern, b_6 b_5 b_4 = 011, for all 10 digits. Each digit is identified by the low-order four bits, b_3 to b_0, using the binary patterns for these digits.

4. The ASCII code is used to encode information that is handled as text. It is not convenient for representation of numbers that are used as operands in arithmetic operations. For this purpose, it is best to convert ASCII-encoded numbers into a binary representation that we discussed before.

5. The ASCII standard uses seven bits to encode a character. In computer systems a more natural size is eight bits, or one byte. There are two common ways of fitting an ASCII-encoded character into a byte. One is to set the eighth bit, b_7, to 0. Another is to use this bit to indicate the parity of the other seven bits, which means showing whether the number of 1s in the seven-bit code is even or odd.
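The parity option can be sketched in Python; `with_even_parity` is a hypothetical helper illustrating the use of b_7 as an even-parity bit:

```python
def with_even_parity(ch):
    """Pack a seven-bit ASCII character into a byte, setting bit b7 so
    that the whole byte contains an even number of 1s."""
    code = ord(ch)                     # the seven-bit ASCII code
    parity = bin(code).count("1") % 2  # 1 if the seven bits hold an odd number of 1s
    return (parity << 7) | code

print(format(with_even_parity("A"), "08b"))  # 'A' = 1000001 has two 1s, so b7 = 0
print(format(with_even_parity("C"), "08b"))  # 'C' = 1000011 has three 1s, so b7 = 1
```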


1.4.4. Digital and Analog Information:

1. Binary numbers can be used to represent many types of information. For example, they can represent music that is stored in a personal music player.

2. Below figure illustrates a music player, which contains an electronic memory for storing music files. A music file comprises a sequence of binary numbers that represent tones. 

3. To convert these binary numbers into sound, a digital-to-analog (D/A) converter circuit is used. It converts digital values into corresponding voltage levels, which create an analog voltage signal that drives the speakers inside the headphones. 

4. The binary values stored in the music player are referred to as digital information, whereas the voltage signal that drives the speakers is analog information.


 

Fig. Using digital technology to represent music.


      

1.5. INTRODUCTION TO LOGIC CIRCUITS:

1. Logic circuits are used in digital computers.

2. Such circuits also form the foundation of many other digital systems, such as those that perform control applications or are involved in digital communications. All such applications are based on some simple logical operations that are performed on input information.

3. Information in computers is represented as electronic signals that can have two discrete values. Although these values are implemented as voltage levels in a circuit, we refer to them simply as logic values, 0 and 1.

4. Any circuit in which the signals are constrained to have only a finite number of discrete values is called a logic circuit. Logic circuits can be designed with different numbers of logic values, such as three, four, or even more.

5. Binary logic circuits are those that have only two logic values.

6. Binary logic circuits have the dominant role in digital technology.

1.5.1. Variables and Functions:

1. The dominance of binary circuits in digital systems is a consequence of their simplicity, which results from constraining the signals to assume only two possible values.

2. The simplest binary element is a switch that has two states. If a given switch is controlled by an input variable x, then we will say that the switch is open if x = 0 and closed if x = 1, as illustrated in Figure


 

Fig. A binary switch

3. Consider a simple application of a switch, where the switch turns a small light bulb on or off. This action is accomplished with the circuit in Figure

4. A battery provides the power source. The light bulb glows when a sufficient amount of current passes through it.

5. The current flows when the switch is closed, that is, when x = 1. In this example the input that causes changes in the behavior of the circuit is the switch control x.

6. The output is defined as the state (or condition) of the light, which we will denote by the letter L. If the light is on, we will say that L = 1. If the light is off, we will say that L = 0.

7. Using this convention, we can describe the state of the light as a function of the input variable x. Since L = 1 if x = 1 and L = 0 if x = 0, we can say that L(x) = x

8. This simple logic expression describes the output as a function of the input. We say that L(x) = x is a logic function and that x is an input variable.

9. Consider now the possibility of using two switches to control the state of the light. Let x_1 and x_2 be the control inputs for these switches. The switches can be connected either in series or in parallel as shown in Figure 


 

Fig. A light controlled by a switch


 

Fig. Two basic functions

10. Using a series connection, the light will be turned on only if both switches are closed. If  either switch is open, the light will be off.

11. This behavior can be described by the expression

L(x_1, x_2) = x_1 • x_2

Where L = 1 if x_1 = 1 and x_2 = 1,

L = 0 otherwise

   

12. The “•” symbol is called the AND operator, and the circuit in Figure a is said to  implement a logical AND function.

13.  The parallel connection of two switches is given in Figure b. In this case the light will be   on if either the x_1 or x_2 switch is closed. The light will also be on if both switches are closed. The light will be off only if both switches are open.

14. This behavior can be stated as

L(x_1, x_2) = x_1 +  x_2

Where L = 1 if x_1 = 1 or x_2 = 1 or if x_1 = x_2 = 1,

L = 0 if x_1 = x_2 = 0.

15. The + symbol is called the OR operator, and the circuit in Figure b is said to implement a logical OR function.

16. Below figure illustrates how three switches can be used to control the light in a more complex way. This series-parallel connection of switches realizes the logic function

L(x_1, x_2, x_3) = (x_1 + x_2) • x_3

The light is on if x_3 = 1 and, at the same time, at least one of the x_1 or x_2 inputs is equal to 1.

 

Fig. A series-parallel connection
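The series-parallel function L(x_1, x_2, x_3) = (x_1 + x_2) • x_3 can be tabulated in Python, using `|` for OR and `&` for AND on 0/1 values (a sketch, not from the text):

```python
from itertools import product

def L(x1, x2, x3):
    """(x1 OR x2) AND x3 — the series-parallel switch circuit."""
    return (x1 | x2) & x3

# Enumerate all eight input valuations and the resulting state of the light.
for x1, x2, x3 in product((0, 1), repeat=3):
    print(x1, x2, x3, "->", L(x1, x2, x3))
```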

Inversion: 

1. So far we have assumed that some positive action takes place when a switch is closed, such as turning the light on. It is equally interesting and useful to consider the possibility that a positive action takes place when a switch is opened.


 

Fig. An Inverting circuit.

2. In this case the switch is connected in parallel with the light, rather than in series. Consequently, a closed switch will short-circuit the light and prevent the current from flowing through it.

3. Note that we have included an extra resistor in this circuit to ensure that the closed switch does not short-circuit the power supply. The light will be turned on when the switch is opened.

4. Formally, we express this functional behavior as

L(x) = x'

Where L = 1 if x = 0,

L = 0 if x = 1

5. The value of this function is the inverse of the value of the input variable. Instead of using the word inverse, it is more common to use the term complement. Thus we say that L(x) is a complement of x in this example. Another frequently used term for the same operation is the NOT operation.

6. There are several commonly used notations for indicating the complementation.

Thus the following are equivalent:

x̄ = x' = !x = ~x = NOT x

Truth Tables:

1. All of the logical operations can also be defined in the form of a table, called a truth table.

2. As shown in Table the first two columns (to the left of the double vertical line) give all four possible combinations of logic values that the variables x1 and x2 can have. The next column defines the AND operation for each combination of values of x1 and x2, and the last column defines the OR operation

Table. A truth table for the AND and OR operation


x_1 x_2 | x_1 • x_2 (AND) | x_1 + x_2 (OR)

 0   0  |        0        |       0

 0   1  |        0        |       1

 1   0  |        0        |       1

 1   1  |        1        |       1


3. The truth table is a useful aid for depicting information involving logic functions. We will use it to define specific functions and to show the validity of certain functional relations. 

4. Small truth tables are easy to deal with. However, they grow exponentially in size with the number of variables. A truth table for three input variables has eight rows because there are eight possible valuations of these variables.

5. In general, for n input variables the truth table has 2^n rows.
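A generic truth-table generator, sketched in Python, makes the 2^n growth concrete (`truth_table` is an illustrative name):

```python
from itertools import product

def truth_table(f, n):
    """List every valuation of n input variables together with f's output."""
    return [(*xs, f(*xs)) for xs in product((0, 1), repeat=n)]

# The AND and OR columns of the two-variable table in the text:
for row in truth_table(lambda x1, x2: (x1 & x2, x1 | x2), 2):
    print(row)

print(len(truth_table(lambda a, b, c: a & b & c, 3)))  # → 8 rows for n = 3
```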

1.5.2. Logic Gates and Networks:

1. A complex function may require many of the basic operations for its   implementation. Each logic operation can be implemented electronically with transistors, resulting in a circuit element called a logic gate.

2. A logic gate has one or more inputs and one output that is a function of its inputs. It is often convenient to describe a logic circuit by drawing a circuit diagram, or schematic, consisting of graphical symbols representing the logic gates.

3. The graphical symbols for the AND, OR, and NOT gates are shown in Figure 

4. The figure indicates on the left side how the AND and OR gates are drawn when there are only a few inputs. On the right side it shows how the symbols are augmented to accommodate a greater number of inputs.

5. A larger circuit is implemented by a network of gates.

6. For example, the logic function from the series-parallel connection figure can be implemented by the network in below figure. The complexity of a given network has a direct impact on its cost. Because it is always desirable to reduce the cost of any manufactured product, it is important to find ways of implementing logic circuits as inexpensively as possible.

7. A network of gates is often called a logic network or simply a logic circuit.


 

Fig. The basic gates



 

Fig. The function from series-parallel connection  

Analysis of a Logic Network:

1. For an existing logic network, it must be possible to determine the function performed by the network. This task is referred to as the analysis process.

2. The reverse task of designing a new network that implements a desired functional      behavior is referred to as the synthesis process.

3. The analysis process is rather straightforward and much simpler than the synthesis   process.

4. Figure shows a simple network consisting of three gates. To analyze its functional behavior, we can consider what happens if we apply all possible combinations of the input signals to it. 

5. Suppose that we start by making x_1 = x_2 = 0. This forces the output of the NOT gate to be equal to 1 and the output of the AND gate to be 0. Because one of the inputs to the OR gate is 1, the output of this gate will be 1

6. Therefore, f = 1 if x_1 = x_2 = 0. If we then let x_1 = 0 and x_2 = 1, no change in the value of f will take place, because the outputs of the NOT and AND gates will still be 1 and 0, respectively. Next, if we apply x_1 = 1 and x_2 = 0, then the output of the NOT gate changes to 0 while the output of the AND gate remains at 0. Both inputs to the OR gate are then equal to 0; hence the value of f will be 0.

7. Finally, let x_1 = x_2= 1. Then the output of the AND gate goes to 1, which in turn causes f to be equal to 1. Our verbal explanation can be summarized in the form of the truth table shown in Figure b.

 

Fig . An example of logic networks.

Timing Diagram:

1. We have determined the behavior of the network in Figure a by considering the four possible valuations of the inputs x_1 and x_2. Suppose that the signals that correspond to these valuations are applied to the network in the order of our discussion; that is, (x_1, x_2) = (0, 0) followed by (0, 1), (1, 0), and (1, 1). Then changes in the signals at various points in the network would be as indicated in blue in the figure. The same information can be presented in graphical form, known as a timing diagram, as shown in fig c.

2. The time runs from left to right, and each input valuation is held for some fixed duration. The figure shows the waveforms for the inputs and output of the network, as well as for the internal signals at the points labeled A and B.

3. The timing diagram in Figure c shows that changes in the waveforms at points A and B and the output f take place instantaneously when the inputs x_1 and x_2 change their values. These idealized waveforms are based on the assumption that logic gates respond to changes on their inputs in zero time. Such timing diagrams are useful for indicating the functional behavior of logic circuits.

4. Practical logic gates are implemented using electronic circuits which need some time to change their states. Thus, there is a delay between a change in input values and a corresponding change in the output value of a gate.

5. Timing diagrams are used for many purposes. They depict the behavior of a logic circuit in a form that can be observed when the circuit is tested using instruments such as logic analyzers and oscilloscopes.

Functionally Equivalent Networks:

1. Now consider the network in Figure 2.10d. Going through the same analysis procedure, we find that the output g changes in exactly the same way as f does in part (a) of the figure. Therefore, g(x_1, x_2) = f (x_1, x_2), which indicates that the two networks are functionally equivalent.

2. The output behavior of both networks is represented by the truth table in Figure b. Since both networks realize the same function, it makes sense to use the simpler one, which is less costly to implement.

3. We have already established this equivalence through detailed analysis of the two circuits and construction of the truth table. But the same outcome can be achieved through algebraic manipulation of logic expressions.

1.6. BOOLEAN ALGEBRA:

1. In 1849 George Boole published a scheme for the algebraic description of processes involved in logical thought and reasoning. Subsequently, this scheme and its further refinements became known as Boolean algebra. It was almost 100 years later that this algebra found application in the engineering sense.

2. Algebra is a powerful tool that can be used for designing and analyzing logic circuits. It provides the foundation for much of our modern digital technology.

Axioms of Boolean algebra:

1. Like any algebra, Boolean algebra is based on a set of rules that are derived from a small number of basic assumptions. These assumptions are called axioms. Let us assume that Boolean algebra involves elements that take on one of two values, 0 and 1.

2. Assume that the following axioms are true:

1a. 0• 0 = 0

1b. 1+ 1 = 1

2a. 1• 1 = 1

2b. 0+ 0 = 0

3a. 0• 1 = 1 • 0 = 0

3b. 1+ 0 = 0 + 1 = 1

4a. If x = 0, then x' = 1

4b. If x = 1, then x' = 0

Single-Variable Theorems:

1. From the axioms we can define some rules for dealing with single variables. These rules are often called theorems.

2. If x is a Boolean variable, then the following theorems hold:

5a. x • 0 = 0

5b. x + 1 = 1

6a. x • 1 = x

6b. x + 0 = x

7a. x • x = x

7b. x + x = x

8a. x • x' = 0

8b. x + x' = 1

9. (x')' = x

3. It is easy to prove the validity of these theorems by perfect induction, that is, by substituting the values x = 0 and x = 1 into the expressions and using the axioms given above.
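Perfect induction for the single-variable theorems can be sketched directly, substituting both values of x (with `&`, `|` standing for AND, OR and `1 - x` for the complement):

```python
for x in (0, 1):
    nx = 1 - x                            # complement of x
    assert x & 0 == 0 and x | 1 == 1      # 5a, 5b
    assert x & 1 == x and x | 0 == x      # 6a, 6b
    assert x & x == x and x | x == x      # 7a, 7b
    assert x & nx == 0 and x | nx == 1    # 8a, 8b
    assert 1 - nx == x                    # 9

print("all single-variable theorems hold")
```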

Duality:

1. Given a logic expression, its dual is obtained by replacing all + operators with • operators, and vice versa, and by replacing all 0s with 1s, and vice versa.

2. The dual of any true statement (axiom or theorem) in Boolean algebra is also a true statement

3. Duality implies that at least two different ways exist to express every logic function with Boolean algebra

4. Often, one expression leads to a simpler physical implementation than the other and is thus preferable.

Two- and Three-Variable Properties :

1. To enable us to deal with a number of variables, it is useful to define some two- and three-variable algebraic identities. For each identity, its dual version is also given. These identities are often referred to as properties.

2. They are known by the names indicated below. If x, y, and z are Boolean variables, then the following properties hold:

10a. x • y = y • x        Commutative

10b. x + y = y + x

11a. x • (y • z) = (x • y) • z        Associative

11b. x + (y + z) = (x + y) + z

12a. x • (y + z) = x • y + x • z        Distributive

12b. x + y • z = (x + y) • (x + z)

13a. x + x • y = x        Absorption

13b. x • (x + y) = x

14a. x • y + x • y' = x        Combining

14b. (x + y) • (x + y') = x

15a. (x • y)' = x' + y'        DeMorgan's theorem

15b. (x + y)' = x' • y'

16a. x + x' • y = x + y

16b. x • (x' + y) = x • y

17a. x • y + y • z + x' • z = x • y + x' • z        Consensus

17b. (x + y) • (y + z) • (x' + z) = (x + y) • (x' + z)

3. Again, we can prove the validity of these properties either by perfect induction or by performing algebraic manipulation.
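Perfect induction for these multi-variable properties means trying all eight valuations of x, y, and z. The sketch below (an illustrative check, using 0/1 integers for logic values) runs through properties 10a to 17b:

```python
from itertools import product

def NOT(a):
    return 1 - a

# Perfect induction over all valuations of x, y, z.
for x, y, z in product((0, 1), repeat=3):
    assert x & y == y & x                                # 10a commutative
    assert x | y == y | x                                # 10b
    assert x & (y & z) == (x & y) & z                    # 11a associative
    assert x | (y | z) == (x | y) | z                    # 11b
    assert x & (y | z) == (x & y) | (x & z)              # 12a distributive
    assert x | (y & z) == (x | y) & (x | z)              # 12b
    assert x | (x & y) == x                              # 13a absorption
    assert x & (x | y) == x                              # 13b
    assert (x & y) | (x & NOT(y)) == x                   # 14a combining
    assert (x | y) & (x | NOT(y)) == x                   # 14b
    assert NOT(x & y) == NOT(x) | NOT(y)                 # 15a DeMorgan
    assert NOT(x | y) == NOT(x) & NOT(y)                 # 15b
    assert x | (NOT(x) & y) == x | y                     # 16a
    assert x & (NOT(x) | y) == x & y                     # 16b
    assert (x & y) | (y & z) | (NOT(x) & z) == (x & y) | (NOT(x) & z)      # 17a consensus
    assert (x | y) & (y | z) & (NOT(x) | z) == (x | y) & (NOT(x) | z)      # 17b
print("all properties verified by perfect induction")
```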

4. Figure illustrates how perfect induction can be used to prove DeMorgan’s theorem, using the format of a truth table. The evaluation of left-hand and right-hand sides of the identity in 15a gives the same result.

 

Fig. Proof of DeMorgan’s theorem in 15a

5. We now have a number of axioms, theorems, and properties. Not all of these are necessary to define Boolean algebra.

6. The preceding axioms, theorems, and properties provide the information necessary for performing algebraic manipulation of more complex expressions.

Ex: Let us prove the validity of the logic equation

       (x_1 + x_3) • (x_1' + x_3') = x_1 • x_3' + x_1' • x_3

Sol: The left-hand side can be manipulated as follows. Using the distributive property, 12a, gives

LHS = (x_1 + x_3) • x_1' + (x_1 + x_3) • x_3'

Applying the distributive property again yields

LHS = x_1 • x_1' + x_3 • x_1' + x_1 • x_3' + x_3 • x_3'

Note that the distributive property allows ANDing the terms in parentheses in a way analogous to multiplication in ordinary algebra. Next, according to theorem 8a, the terms x_1 • x_1' and x_3 • x_3' are both equal to 0. Therefore,

LHS = 0 + x_3 • x_1' + x_1 • x_3' + 0

From 6b it follows that

LHS = x_3 • x_1' + x_1 • x_3'

Finally, using the commutative property, 10a and 10b, this becomes

LHS = x_1 • x_3' + x_1' • x_3

which is the same as the right-hand side of the initial equation.
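The identity just proved algebraically can also be confirmed by perfect induction. The short sketch below (a supplementary check with 0/1 integers standing in for logic values) evaluates both sides for every valuation of x_1 and x_3:

```python
from itertools import product

# Check (x1 + x3)(x1' + x3') == x1 x3' + x1' x3 for all four valuations.
for x1, x3 in product((0, 1), repeat=2):
    lhs = (x1 | x3) & ((1 - x1) | (1 - x3))
    rhs = (x1 & (1 - x3)) | ((1 - x1) & x3)
    assert lhs == rhs
print("identity holds for all valuations")
```

Both sides describe the same function: it is 1 exactly when x_1 and x_3 differ (the exclusive-OR of x_1 and x_3).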

1.7. SYNTHESIS USING GATES:

1. With some basic ideas, we can implement arbitrary functions using the AND, OR, and NOT gates.

2. Suppose the function of a circuit is to continuously monitor the state of two switches and to produce an output logic value of 1 whenever the switches (x_1, x_2) are in states (0, 0), (0, 1), or (1, 1). If the state of the switches is (1, 0), the output should be 0.

3. We can express the required behavior using a truth table, as shown in Figure 

 

Fig. A function to be synthesized

4.  A possible procedure for designing a logic circuit that implements this truth table is to create a product term that has a value of 1 for each valuation for which the output function f has to be 1.

5. Then we can take a logical sum of these product terms to realize f

6. Let us begin with the fourth row of the truth table, which corresponds to x_1 = x_2 = 1. The product term that is equal to 1 for this valuation is x_1 • x_2, which is just the AND of x_1 and x_2.

7. Next consider the first row of the table, for which x_1 = x_2 = 0. For this valuation the value 1 is produced by the product term x_1' • x_2'. Similarly, the second row leads to the term x_1' • x_2.

8. Thus f may be realized as f (x_1, x_2) = x_1 x_2 + x_1' x_2' + x_1' x_2

9. The logic network that corresponds to this expression is shown in Figure a

10. Although this network implements f correctly, it is not the simplest such network. To find a simpler network, we can manipulate the obtained expression using the theorems and properties.

11. According to theorem 7b, we can replicate any term in a logical sum expression. Replicating the third product term, the above expression becomes

f (x_1, x_2) = x_1 x_2 + x_1' x_2' + x_1' x_2 + x_1' x_2

12. Now the distributive property 12a allows us to write

f (x_1, x_2) = (x_1 + x_1') x_2 + (x_2' + x_2) x_1'


 

Fig. Two implementations of the function 

13. Applying theorem 8b we get

f (x_1, x_2) = 1 • x_2 + 1 • x_1'

14. Finally, theorem 6a leads to

f (x_1, x_2) = x_2 + x_1'

15. The network described by this expression is given in Figure b. Obviously, the cost of this network is much less than the cost of the network in part (a) of the figure.

16. Algebraic manipulation can be used to derive simplified logic expressions and thus lower-cost networks.

17. The process whereby we begin with a description of the desired functional behavior and then generate a circuit that realizes this behavior is called synthesis. Thus we can say that we “synthesized” the networks from the truth table.
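The synthesis example above can be checked mechanically. The sketch below (an illustrative check, not from the notes) compares the canonical three-term expression, the simplified expression, and the original truth-table specification for the two-switch function:

```python
from itertools import product

# Truth table of the monitored-switch function: f = 1 for (0,0), (0,1), (1,1).
spec = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}

for x1, x2 in product((0, 1), repeat=2):
    canonical = (x1 & x2) | ((1 - x1) & (1 - x2)) | ((1 - x1) & x2)
    simplified = x2 | (1 - x1)          # f = x2 + x1'
    assert canonical == simplified == spec[(x1, x2)]
print("both networks realize the specified function")
```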

Ex. From the given truth table derive the simplified function and implement the function

 

Sol: From the truth table, the canonical sum-of-products expression is

           f = s_1' s_2' s_3 + s_1' s_2 s_3 + s_1 s_2' s_3 + s_1 s_2 s_3' + s_1 s_2 s_3

           We can first use rule 7b to repeat the term s_1 s_2 s_3, and then use the distributive property 12a and rule 8b to simplify the expression.

f = s_1' s_2' s_3 + s_1' s_2 s_3 + s_1 s_2' s_3 + s_1 s_2 s_3 + s_1 s_2 s_3' + s_1 s_2 s_3

f = s_1' s_3 (s_2' + s_2) + s_1 s_3 (s_2' + s_2) + s_1 s_2 (s_3' + s_3)

f = s_1' s_3 + s_1 s_3 + s_1 s_2

f = s_3 (s_1' + s_1) + s_1 s_2

f = s_3 + s_1 s_2

Implementation of the simplified function 

Fig. Circuit for f = s_3 + s_1 s_2 (s_1 and s_2 feed an AND gate, whose output is ORed with s_3)
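The simplification can be double-checked by evaluating the canonical five-term expression against f = s_3 + s_1 s_2 for every valuation (a supplementary check, with 0/1 integers for logic values):

```python
from itertools import product

for s1, s2, s3 in product((0, 1), repeat=3):
    n1, n2, n3 = 1 - s1, 1 - s2, 1 - s3   # complements
    canonical = (n1 & n2 & s3) | (n1 & s2 & s3) | (s1 & n2 & s3) \
              | (s1 & s2 & n3) | (s1 & s2 & s3)
    assert canonical == (s3 | (s1 & s2))
print("canonical SOP and simplified form agree")
```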

Sum-of-Products and Product-of-Sums Forms :

1. If a function f is specified in the form of a truth table, then an expression that realizes f can be obtained by considering either the rows in the table for which f = 1, as we have already done, or by considering the rows for which f = 0.

Minterms:

1. For a function of n variables, a product term in which each of the n variables appears once is called a minterm. 

2. The variables may appear in a minterm either in uncomplemented or complemented form. For a given row of the truth table, the minterm is formed by including x_i if x_i = 1 and by including x_i' if x_i = 0.

3. To illustrate this concept, consider the truth table in Figure. We have numbered the rows of the table from 0 to 7, so that we can refer to them easily.

4. From the discussion of the binary number representation in previous section, we can observe that the row numbers chosen are just the numbers represented by the bit patterns of variables x_(1 ), x_(2 ), and x_(3 )

5. The figure shows all minterms for the three-variable table

 

Fig. Three-variable minterms and maxterms.

6. For example, in the first row the variables have the values x_1 = x_2 = x_3 = 0, which leads to the minterm x_1' x_2' x_3'. In the second row x_1 = x_2 = 0 and x_3 = 1, which gives the minterm x_1' x_2' x_3, and so on.

7. To be able to refer to the individual minterms easily, it is convenient to identify each minterm by an index that corresponds to the row numbers shown in the figure. We will use the notation m_i to denote the minterm for row number i. Thus m_0 = x_1' x_2' x_3', m_1 = x_1' x_2' x_3, and so on.

Sum-of-Products Form :

1. A function f can be represented by an expression that is a sum of minterms, where each minterm is ANDed with the value of f for the corresponding valuation of input variables.

2. For example, the two-variable minterms are m_0 = x_1' x_2', m_1 = x_1' x_2, m_2 = x_1 x_2', and m_3 = x_1 x_2. The function in Figure can be represented as

f = m_0 • 1 + m_1 • 1 + m_2 • 0 + m_3 • 1

  = m_0 + m_1 + m_3

  = x_1' x_2' + x_1' x_2 + x_1 x_2

3. Any function f can be represented by a sum of minterms that correspond to the rows in the truth table for which f = 1. The resulting implementation is functionally correct and unique, but it is not necessarily the lowest-cost implementation of  f.

4. A logic expression consisting of product (AND) terms that are summed (ORed) is said to be in the sum-of products (SOP) form. If each product term is a minterm, then the expression is called a canonical sum-of-products for the function f.

5. The first step in the synthesis process is to derive a canonical sum-of-products expression for the given function. Then we can manipulate this expression, using the theorems and properties, with the goal of finding a functionally equivalent sum-of-products expression that has a lower cost.
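Deriving a canonical sum-of-products from a truth table is mechanical enough to sketch in code. The helper below (an illustrative sketch; the function names are my own, not from the notes) builds the minterm list from the row numbers where f = 1 and then evaluates the resulting SOP:

```python
from itertools import product

def canonical_sop(n, one_rows):
    """Return the canonical SOP of an n-variable function as a list of
    minterm bit-tuples, given the row numbers where f = 1.
    Row number r has x_1 as its most significant bit."""
    return [tuple((r >> (n - 1 - i)) & 1 for i in range(n)) for r in one_rows]

def eval_sop(minterms, values):
    # A minterm evaluates to 1 when every variable matches its bit pattern;
    # the SOP is the OR of all minterms.
    return int(any(all(v == b for v, b in zip(values, m)) for m in minterms))

# The two-variable function above: f = 1 in rows 0, 1, and 3.
minterms = canonical_sop(2, [0, 1, 3])
for row, values in enumerate(product((0, 1), repeat=2)):
    assert eval_sop(minterms, values) == (1 if row in (0, 1, 3) else 0)
print(minterms)
```

The printed minterm tuples correspond to m_0 = x_1' x_2', m_1 = x_1' x_2, and m_3 = x_1 x_2.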

Ex: Consider the three-variable function f (x_(1 ),x_(2 ),x_(3 )), specified by the truth table in Figure and minimize the function and realize the logic diagram

 

Fig. A three-variable function

Sol: To synthesize this function, we have to include the minterms m_1, m_4, m_5 and m_6. Copying these minterms from Figure leads to the following canonical sum-of-products expression for f

f (x_1, x_2, x_3) = x_1' x_2' x_3 + x_1 x_2' x_3' + x_1 x_2' x_3 + x_1 x_2 x_3'

      1. This expression can be manipulated as follows

f (x_1, x_2, x_3) = (x_1' + x_1) x_2' x_3 + (x_2' + x_2) x_1 x_3'

f (x_1, x_2, x_3) = 1 • x_2' x_3 + 1 • x_1 x_3'

f (x_1, x_2, x_3) = x_2' x_3 + x_1 x_3'

      2. This is the minimum-cost sum-of-products expression for f . It describes the circuit shown in below figure. 

       3. A good indication of the cost of a logic circuit is the total number of gates plus the total number of inputs to all gates in the circuit.

 

Fig. A minimal sum-of-products realization

4. Using this measure, the cost of the network in Figure is 13, because there are five gates and eight inputs to the gates.

       5. By comparison, the network implemented on the basis of the canonical sum-of- products would have a cost of 27

        6. Minterms, with their row-number subscripts, can also be used to specify a given function in a more concise form

       7. For example, the above function can be specified

f (x_1, x_2, x_3) = Σ (m_1, m_4, m_5, m_6)

or even more simply as

f (x_1, x_2, x_3) = Σ m(1, 4, 5, 6)

The Σ sign denotes the logical sum operation. This shorthand notation is often used in practice.

Ex: Consider the function f (x_1, x_2, x_3) = Σ m(2, 3, 4, 6, 7)

The canonical SOP expression for the function is derived using minterms

f (x_1, x_2, x_3) = m_2 + m_3 + m_4 + m_6 + m_7

f (x_1, x_2, x_3) = x_1' x_2 x_3' + x_1' x_2 x_3 + x_1 x_2' x_3' + x_1 x_2 x_3' + x_1 x_2 x_3

  This expression can be simplified as follows

f (x_1, x_2, x_3) = x_1' x_2 x_3' + x_1' x_2 x_3 + x_1 x_2' x_3' + x_1 x_2 x_3' + x_1 x_2 x_3' + x_1 x_2 x_3

  (since x_1 x_2 x_3' + x_1 x_2 x_3' = x_1 x_2 x_3', by theorem 7b)

f (x_1, x_2, x_3) = x_1' x_2 (x_3' + x_3) + x_1 x_3' (x_2' + x_2) + x_1 x_2 (x_3' + x_3)

f (x_1, x_2, x_3) = x_1' x_2 + x_1 x_3' + x_1 x_2

f (x_1, x_2, x_3) = (x_1' + x_1) x_2 + x_1 x_3'

f (x_1, x_2, x_3) = x_2 + x_1 x_3'
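A quick exhaustive check (an illustrative sketch, with row numbers taken from the x_1 x_2 x_3 bit patterns) confirms that the simplified expression still covers exactly the minterms 2, 3, 4, 6, and 7:

```python
from itertools import product

# enumerate() gives the row number because product() yields the
# valuations in binary-counting order with x1 as the most significant bit.
for row, (x1, x2, x3) in enumerate(product((0, 1), repeat=3)):
    f = x2 | (x1 & (1 - x3))            # f = x2 + x1 x3'
    assert f == (1 if row in (2, 3, 4, 6, 7) else 0)
print("f = x2 + x1 x3' realizes sum m(2,3,4,6,7)")
```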

Ex: Minimize the given four-variable function

f (x_1, x_2, x_3, x_4) = Σ m(3, 7, 9, 12, 13, 14, 15)
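One minimum-cost cover can be read off a Karnaugh map: the group of four x_1 x_2, plus the pairs x_1' x_3 x_4 and x_1 x_3' x_4. This grouping is a suggested solution, worth checking against your own map; the sketch below verifies it against the minterm list:

```python
from itertools import product

minterms = {3, 7, 9, 12, 13, 14, 15}

# Candidate cover: x1x2 (group of four), x1'x3x4 and x1x3'x4 (pairs).
for row, (x1, x2, x3, x4) in enumerate(product((0, 1), repeat=4)):
    f = (x1 & x2) | ((1 - x1) & x3 & x4) | (x1 & (1 - x3) & x4)
    assert f == (1 if row in minterms else 0)
print("f = x1 x2 + x1' x3 x4 + x1 x3' x4 covers exactly sum m(3,7,9,12,13,14,15)")
```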

Maxterms:

1. The principle of duality suggests that if it is possible to synthesize a function f by considering the rows in the truth table for which f = 1, then it should also be possible to synthesize f by considering the rows for which f = 0. This alternative approach uses the complements of minterms, which are called maxterms.

2. All possible maxterms for three-variable functions are listed in Figure. We will refer to a maxterm M_j by the same row number as its corresponding minterm m_j, as shown in the figure.

 

Fig. Three-variable minterms and maxterms.

Product-of-Sums Form:

1. If a given function f is specified by a truth table, then its complement f' can be represented by a sum of the minterms for which f' = 1, which are the rows where f = 0. For example, for the function in below Figure

 

f' (x_1, x_2) = m_2 = x_1 x_2'

If we complement this expression using DeMorgan's theorem, the result is (f')' = f = (x_1 x_2')' = x_1' + x_2

 2. Note that we obtained this expression previously by algebraic manipulation of the canonical sum-of-products form for the function f. The key point here is that

f = m_2' = M_2

            where M_2 is the maxterm for row 2 in the truth table.

3. As another example, consider the function in below Figure. The complement of  this function can be represented as

 

Fig. A three-variable function

f' (x_1, x_2, x_3) = m_0 + m_2 + m_3 + m_7

  = x_1' x_2' x_3' + x_1' x_2 x_3' + x_1' x_2 x_3 + x_1 x_2 x_3

Then f can be expressed as

f = (m_0 + m_2 + m_3 + m_7)'

  = m_0' • m_2' • m_3' • m_7'

  = M_0 • M_2 • M_3 • M_7

  = (x_1 + x_2 + x_3)(x_1 + x_2' + x_3)(x_1 + x_2' + x_3')(x_1' + x_2' + x_3')

This expression represents f as a product of maxterms.

4. A logic expression consisting of sum (OR) terms that are the factors of a logical product (AND) is said to be of the product-of-sums (POS) form. If each sum term is a maxterm, then the expression is called a canonical product-of-sums for the given function.

5. Any function f can be synthesized by finding its canonical product-of-sums. This involves taking the maxterm for each row in the truth table for which f = 0 and forming a product of these maxterms.

6. We can reduce the complexity of the derived expression that comprises a product of maxterms. Using the commutative property 10b and the associative property 11b ,this expression can be written as

f = ((x_1 + x_3) + x_2)((x_1 + x_3) + x_2')(x_1 + (x_2' + x_3'))(x_1' + (x_2' + x_3'))

7. Then, using the combining property 14b, the expression reduces to

f = (x_1 + x_3)(x_2' + x_3')
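The reduced product-of-sums can be checked against the original specification, which has f = 0 exactly in rows 0, 2, 3, and 7 (a supplementary check, with 0/1 integers for logic values):

```python
from itertools import product

for row, (x1, x2, x3) in enumerate(product((0, 1), repeat=3)):
    f = (x1 | x3) & ((1 - x2) | (1 - x3))   # f = (x1 + x3)(x2' + x3')
    assert f == (0 if row in (0, 2, 3, 7) else 1)
print("POS expression is 0 exactly at rows 0, 2, 3, 7")
```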

8. The corresponding network is given in Figure 

 

Fig. A minimal product-of-sums realization

9. The cost of this network is 13

10. Using the shorthand notation, an alternative way of specifying our sample function is

f (x_1, x_2, x_3) = Π (M_0, M_2, M_3, M_7)

  = Π M(0, 2, 3, 7)

The Π sign denotes the logical product operation.

Ex: Consider the function f (x_1, x_2, x_3) = Σ m(2, 3, 4, 6, 7). Instead of using the minterms, we can specify this function as a product of maxterms for which f = 0, namely

f (x_1, x_2, x_3) = Π M(0, 1, 5)

Sol: 1. Then, the canonical POS expression is derived as

f = M_0 • M_1 • M_5

f = (x_1 + x_2 + x_3)(x_1 + x_2 + x_3')(x_1' + x_2 + x_3')

        2. A simplified POS expression can be derived as

f = (x_1 + x_2 + x_3)(x_1 + x_2 + x_3')(x_1 + x_2 + x_3')(x_1' + x_2 + x_3')

f = ((x_1 + x_2) + x_3)((x_1 + x_2) + x_3')(x_1 + (x_2 + x_3'))(x_1' + (x_2 + x_3'))

f = ((x_1 + x_2) + x_3' x_3)(x_1 x_1' + (x_2 + x_3'))

f = (x_1 + x_2)(x_2 + x_3')

       3. Another way of deriving this product-of-sums expression is to use the sum-of-products form of f'. Thus,

                 f' (x_1, x_2, x_3) = Σ m(0, 1, 5)

                           = x_1' x_2' x_3' + x_1' x_2' x_3 + x_1' x_2' x_3 + x_1 x_2' x_3

                           = x_1' x_2' (x_3' + x_3) + x_2' x_3 (x_1' + x_1)

                           = x_1' x_2' + x_2' x_3

        4. Now, first applying DeMorgan's theorem 15b, and then applying 15a (twice) gives

                     (f')' = f = (x_1' x_2' + x_2' x_3)'

              = (x_1' x_2')' • (x_2' x_3)'

             = (x_1 + x_2)(x_2 + x_3')
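Both the SOP result derived earlier and the POS result derived here should describe the same function, Σ m(2, 3, 4, 6, 7). The sketch below checks this (a supplementary check with 0/1 integers):

```python
from itertools import product

for row, (x1, x2, x3) in enumerate(product((0, 1), repeat=3)):
    sop = x2 | (x1 & (1 - x3))               # earlier SOP result: x2 + x1 x3'
    pos = (x1 | x2) & (x2 | (1 - x3))        # POS result: (x1 + x2)(x2 + x3')
    assert sop == pos == (1 if row in (2, 3, 4, 6, 7) else 0)
print("SOP and POS forms are equivalent")
```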


NAND and NOR Logic Networks:

1. NAND and NOR functions are obtained by complementing the output generated by the AND and OR operations, respectively.

2. These functions are attractive because they are implemented with simpler electronic circuits than the AND and OR functions.

3. Figure gives the graphical symbols for the NAND and NOR gates. A bubble is placed on the output side of the AND and OR gate symbols to represent the complemented output signal.

4. The NAND gate implements the function f = (x_1 • x_2)', and the NOR gate implements the function f = (x_1 + x_2)'.

 

Fig. NAND and NOR gates

5. DeMorgan's theorem can also be interpreted in terms of logic gates, as shown in below Figure

 

Fig. DeMorgan’s theorem in terms of logic gates.

6. DeMorgan's theorem identity 15a is interpreted in part (a) of the figure. It specifies that a NAND of variables x_1 and x_2 is equivalent to first complementing each of the variables and then ORing them. Notice on the far-right side that we have indicated the NOT gates simply as bubbles, which denote inversion of the logic value at that point.

7. The other half of DeMorgan’s theorem, identity 15b, appears in part (b) of the figure. It states that the NOR function is equivalent to first inverting the input variables and then ANDing them.

8. Any logic function can be implemented either in sum-of-products or product-of-sums form, which leads to logic networks that have either an AND-OR or an OR-AND structure, respectively. We will now show that such networks can be implemented using only NAND gates or only NOR gates.

9. Consider the network in below figure as a representative of general AND-OR networks. This network can be transformed into a network of NAND gates as shown in the figure.

 

Fig. Using NAND gates to implement a sum-of-products.

10. First, each connection between an AND gate and the OR gate is replaced by a connection that includes two inversions of the signal: one inversion at the output of the AND gate and the other at the input of the OR gate. Such double inversion has no effect on the behavior of the network.

11. The OR gate with inversions at its inputs is equivalent to a NAND gate. Thus we can redraw the network using only NAND gates.

12. This example shows that any AND-OR network can be implemented as a NAND-NAND network having the same topology.

13. Below figure gives a similar construction for a product-of-sums network, which can be transformed into a circuit with only NOR gates.

 

Fig. Using NOR gates to implement a product-of-sums.

14. The procedure is exactly the same but the AND gate with inversions at its inputs is equivalent to a NOR gate. Thus we can redraw the network using only NOR gates

15. The conclusion is that any OR-AND network can be implemented as a NOR-NOR network having the same topology.
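The NAND-NAND and NOR-NOR equivalences can be demonstrated on a small example network. The network used below, x_1 x_2 + x_2 x_3' (and its dual), is my own illustration, not one from the figures:

```python
from itertools import product

def nand(*inputs):
    return 1 - all(inputs)   # complemented AND

def nor(*inputs):
    return 1 - any(inputs)   # complemented OR

for x1, x2, x3 in product((0, 1), repeat=3):
    nx3 = 1 - x3

    # AND-OR network vs. the same-topology NAND-NAND network
    and_or = (x1 & x2) | (x2 & nx3)
    nand_nand = nand(nand(x1, x2), nand(x2, nx3))
    assert and_or == nand_nand

    # OR-AND network vs. the same-topology NOR-NOR network
    or_and = (x1 | x2) & (x2 | nx3)
    nor_nor = nor(nor(x1, x2), nor(x2, nx3))
    assert or_and == nor_nor
print("NAND-NAND matches AND-OR; NOR-NOR matches OR-AND")
```

The double inversion introduced on each internal wire cancels out, which is why the topology can stay the same.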

Ex: Let us implement the function f (x_1, x_2, x_3) = Σ m(2, 3, 4, 6, 7) using NOR gates only. This function can be represented by the POS expression

f = (x_1 + x_2)(x_2 + x_3')

An OR-AND circuit that corresponds to this expression is shown in Figure a. Using the same structure of the circuit, a NOR-gate version is given in Figure b. Note that x_3 is inverted by a NOR gate that has its inputs tied together.

 

Fig. NOR-gate realization of the function

Ex: Let us implement the function f (x_1, x_2, x_3) = Σ m(2, 3, 4, 6, 7) using NAND gates only. The SOP expression of the given function is f = x_2 + x_1 x_3'

1. An AND-OR circuit that corresponds to this expression is shown in Figure a. Using the same structure of the circuit, a NAND -gate version is given in Figure b.

 

Fig. NAND-gate realization of the function

2. The only difference is that signal x_2 passes only through an OR gate, instead of passing through an AND gate and an OR gate.

3. If we simply replace the OR gate with a NAND gate, this signal would be inverted, which would result in a wrong output value. Since x_2 must either not be inverted, or it can be inverted twice, we can pass it through two NAND gates as shown in Figure b

   Observe that for this circuit the output f is

f = (x_2' • (x_1 x_3')')'

Applying DeMorgan's theorem, this expression becomes

f = x_2 + x_1 x_3'
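The all-NAND circuit, including the two extra NAND gates that invert x_2 and x_3, can be simulated directly (a supplementary check; the gate wiring follows the expression above):

```python
from itertools import product

def nand(*inputs):
    return 1 - all(inputs)

for row, (x1, x2, x3) in enumerate(product((0, 1), repeat=3)):
    inv_x2 = nand(x2, x2)                       # x2 inverted by tying both inputs
    inv_x3 = nand(x3, x3)                       # x3 inverted the same way
    f = nand(inv_x2, nand(x1, inv_x3))          # f = (x2' . (x1 x3')')'
    assert f == (x2 | (x1 & (1 - x3)))          # equals x2 + x1 x3'
    assert f == (1 if row in (2, 3, 4, 6, 7) else 0)
print("NAND-only circuit realizes sum m(2,3,4,6,7)")
```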

1.8. DESIGN EXAMPLES:

1. Logic circuits provide a solution to a problem. They implement functions that are needed to carry out specific tasks. Within the framework of a computer, logic circuits provide complete capability for execution of programs and processing of data.

2. A designer of logic circuits always deals with the same basic issues. First, it is necessary to specify the desired behavior of the circuit. Second, the circuit has to be synthesized and implemented. Finally, the implemented circuit has to be tested to verify that it meets the specifications.

Ex: Three-Way Light Control

1. Assume that a large room has three doors and that a switch near each door controls a light in the room. It has to be possible to turn the light on or off by changing the state of any one of the switches.

2. As a first step, let us turn this word statement into a formal specification using a truth table. Let x1, x2, and x3 be the input variables that denote the state of each switch.

3. Assume that the light is off if all switches are open. Closing any one of the switches will turn the light on. Then turning on a second switch will have to turn off the light. Thus the light will be on if exactly one switch is closed, and it will be off if two (or no) switches are closed. If the light is off when two switches are closed, then it must be possible to turn it on by closing the third switch.

4. If f (x_(1 ),x_(2 ),x_(3 )) represents the state of the light, then the required functional behavior can be specified as shown in the truth table in Figure.

 

Fig. Truth table for the three-way light control.

5. The canonical sum-of-products expression for the specified function is

        f = m_1 + m_2 + m_4 + m_7

            = x_1' x_2' x_3 + x_1' x_2 x_3' + x_1 x_2' x_3' + x_1 x_2 x_3

6. This expression cannot be simplified into a lower-cost sum-of-products expression. The resulting circuit is shown in Figure a.

7. An alternative realization for this function is in the product-of-sums form. The canonical expression of this type is 

f = M_0 • M_3 • M_5 • M_6

  = (x_1 + x_2 + x_3)(x_1 + x_2' + x_3')(x_1' + x_2 + x_3')(x_1' + x_2' + x_3)

8. The resulting circuit is depicted in Figure b
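The three-way light control function is 1 exactly when an odd number of switches are closed, i.e. it is the exclusive-OR of the three inputs (an observation worth noting; the check below verifies both canonical forms against it):

```python
from itertools import product

for x1, x2, x3 in product((0, 1), repeat=3):
    n1, n2, n3 = 1 - x1, 1 - x2, 1 - x3

    # Canonical SOP: minterms 1, 2, 4, 7
    sop = (n1 & n2 & x3) | (n1 & x2 & n3) | (x1 & n2 & n3) | (x1 & x2 & x3)

    # Canonical POS: maxterms 0, 3, 5, 6
    pos = (x1 | x2 | x3) & (x1 | n2 | n3) & (n1 | x2 | n3) & (n1 | n2 | x3)

    # Light is on iff an odd number of switches are closed.
    assert sop == pos == (x1 ^ x2 ^ x3)
print("three-way light control equals x1 XOR x2 XOR x3")
```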

 

Fig. Implementation of the function

1.9. OPTIMIZED IMPLEMENTATION OF LOGIC FUNCTIONS USING K-MAP:

1. In general, it is not obvious when to apply these theorems and properties to find a minimum-cost circuit, and it is often tedious and impractical to do so. This section introduces a more manageable approach, called the Karnaugh map, which provides a systematic way of producing a minimum-cost logic expression.

2. The Karnaugh map is an alternative to the truth-table form for representing a function. The map consists of cells that correspond to the rows of the truth table.

3. Consider the two-variable example in Figure. Part (a) depicts the truth-table form, where each of the four rows is identified by a minterm. Part (b) shows the Karnaugh map, which has four cells.

4. The columns of the map are labeled by the value of x1, and the rows are labeled by x2. This labeling leads to the locations of minterms as shown in the figure 

5. Compared to the truth table, the advantage of the Karnaugh map is that it allows easy recognition of minterms that can be combined using property 14a.

 

Fig . Location of two-variable minterms.

6. Minterms in any two cells that are adjacent, either in the same row or the same column, can be combined. For example, the minterms m2 and m3 can be combined as

m_2 + m_3 = x_1 x_2' + x_1 x_2

  = x_1 (x_2' + x_2)

  = x_1 • 1 = x_1

7. The Karnaugh map is not useful just for combining pairs of minterms; it can be used directly to derive a minimum-cost circuit for a logic function.

Two-Variable Map:

1. A Karnaugh map for a two-variable function is given in Figure. It corresponds to the function f shown in the truth table

 

 

Fig. The function of above table 

2. The value of f for each valuation of the variables x_1 and x_2 is indicated in the corresponding cell of the map. Because a 1 appears in both cells of the bottom row and these cells are adjacent, there exists a single product term that can cause f to be equal to 1 when the input variables have the values that correspond to either of these cells.

3. To indicate this fact, we have circled the cell entries in the map. Rather than applying the combining property formally, we can derive the product term directly, by inspection.

4. Both of the cells are identified by x_2 = 1, but x_1 = 0 for the left cell and x_1 = 1 for the right cell. Thus if x_2 = 1, then f = 1 regardless of whether x_1 is equal to 0 or 1. The product term representing the two cells is simply x_2.

5. Similarly, f = 1 for both cells in the first column. These cells are identified by x_1 = 0. Therefore, they lead to the product term x_1'. Since this takes care of all instances where f = 1, it follows that the minimum-cost realization of the function is

f = x_2 + x_1'

6. Evidently, to find a minimum-cost implementation of a given function, it is necessary to find the smallest number of product terms that produce a value of 1 for all cases where f = 1.

7. Note that a product term that covers two adjacent cells is cheaper to implement than a term that covers only a single cell.
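Reading the map description above, f = 1 in rows 0, 1, and 3 (the bottom row plus the first column). The sketch below checks that the two circled groups, x_2 and x_1', cover exactly those rows (the minterm set is inferred from the map description, so treat it as an assumption):

```python
# Rows in binary-counting order: (x1, x2) = (0,0), (0,1), (1,0), (1,1).
for row, (x1, x2) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    f = x2 | (1 - x1)            # the two groups read off the map
    assert f == (1 if row in (0, 1, 3) else 0)
print("f = x2 + x1' covers rows 0, 1, 3")
```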

Three-Variable Map:

1. A three-variable Karnaugh map is constructed by placing 2 two-variable maps side by side. Figure a lists all of the three-variable minterms, and part (b) of the figure indicates the locations of these minterms in the Karnaugh map.

 

Fig. Location of three-variable minterms.

2. In this case each valuation of x_1 and x_2 identifies a column in the map, while the value of x_3 distinguishes the two rows. To ensure that minterms in adjacent cells in the map can always be combined into a single product term, the adjacent cells must differ in the value of only one variable.

3. Thus the columns are identified by the sequence of (x1, x2) values of 00, 01, 11, and 10, rather than the more obvious 00, 01, 10, and 11. This makes the second and third columns different only in variable x1. Also, the first and the fourth columns differ only in variable x1, which means that these columns can be considered as being adjacent.

4. It is useful to visualize the map as a rectangle folded into a cylinder where the left and the right edges in Figure  are made to touch. (A sequence of codes, or valuations, where consecutive codes differ in one variable only is known as the Gray code.)

5. Figure shows a function in Karnaugh-map form. To synthesize this function, it is necessary to cover the four 1s in the map as efficiently as possible.

  

(a) 

  

(b) 

Fig. Examples of three-variable Karnaugh maps

6. The first term, x_1 x_3', covers the 1s in the top row. The second term, x_2' x_3, covers the 1s in the bottom row. Hence the function is implemented as

f = x_1 x_3' + x_2' x_3

7. In a three-variable map it is possible to combine cells to produce product terms that correspond to a single cell, two adjacent cells, or a group of four adjacent cells.

8. Realization of a group of four adjacent cells using a single product term is illustrated in Figure b, using the function shown in the same Figure b.

9. The four cells in the top row correspond to the (x_1, x_2, x_3) valuations 000, 010, 110, and 100. As we discussed before, this indicates that if x_3 = 0, then f = 1 for all four possible valuations of x_1 and x_2, which means that the only requirement is that x_3 = 0. Therefore, the product term x_3' represents these four cells.

10. The remaining 1, corresponding to minterm m_5, is best covered by the term x_1 x_2', obtained by combining the two cells in the right-most column. The complete realization of f is

f = x_3' + x_1 x_2'
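From the description above, this function has 1s in rows 0, 2, 4, 5, and 6 (the whole top row plus m_5); that minterm set is inferred from the text, so treat it as an assumption. The check below confirms the two-group cover:

```python
from itertools import product

for row, (x1, x2, x3) in enumerate(product((0, 1), repeat=3)):
    f = (1 - x3) | (x1 & (1 - x2))   # group of four (x3') plus a pair (x1 x2')
    assert f == (1 if row in (0, 2, 4, 5, 6) else 0)
print("f = x3' + x1 x2' covers rows 0, 2, 4, 5, 6")
```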

11. It is also possible to have a group of eight 1s in a three-variable map. This is the trivial case of a function where f = 1 for all valuations of input variables; in other words, f is equal to the constant 1

Four-Variable Map:

1. A four-variable map is constructed by placing 2 three-variable maps together to create four rows in the same fashion as we used 2 two-variable maps to form the four columns in a three-variable map.

2. Figure shows the structure of the four-variable map and the location of minterms.

 

Fig. A four-variable Karnaugh map.

3. From the figure, x_1 = 1 for the two right-most columns, x_2 = 1 for the two middle columns, x_3 = 1 for the bottom two rows, and x_4 = 1 for the two middle rows.

4. Below figure gives four examples of four-variable functions. The function f_1 has a group of four 1s in adjacent cells in the bottom two rows, for which x_2 = 0 and x_3 = 1; they are represented by the product term x_2' x_3.

5. This leaves the two 1s in the second row to be covered, which can be accomplished with the term x_1 x_3' x_4. Hence the minimum-cost implementation of the function is

f_1 = x_2' x_3 + x_1 x_3' x_4

 

Fig. Examples of four-variable Karnaugh maps.

6. The function f_2 includes a group of eight 1s that can be implemented by a single term, x_3.

7. Note that if the remaining two 1s were implemented as a group of two, the result would be the product term x_1 x_3' x_4. Implementing these 1s as part of a group of four 1s, as shown in the figure, gives the less expensive product term x_1 x_4.

8. Just as the left and the right edges of the map are adjacent in terms of the assignment of the variables, so are the top and the bottom edges. Indeed, the four corners of the map are adjacent to each other.

9. The four corners of the map are adjacent to each other and thus can form a group of four 1s, which may be implemented by the product term x_2' x_4'. This case is depicted by the function f_3.

10. In addition to this group of 1s, there are four other 1s that must be covered to implement f_3. This can be done as shown in the figure.

11. The function f_4 provides an example where there is some choice. The groups of four 1s in the top-left and bottom-right corners of the map are realized by the terms x_1' x_3' and x_1 x_3, respectively.

This leaves the two 1s that correspond to the term x_1 x_2 x_3'. But these two 1s can be realized more economically by treating them as part of a group of four 1s.

12. They can be included in two different groups of four, as shown in the figure. One choice leads to the product term x_1 x_2, and the other leads to x_2 x_3'. Both of these terms have the same cost; hence it does not matter which one is chosen in the final circuit.

13. Note that the complement of x_3 in the term x_2 x_3' does not imply an increased cost in comparison with x_1 x_2, because this complement must be generated anyway to produce the term x_1' x_3', which is included in the implementation.

Five-Variable Map:

1. We can use 2 four-variable maps to construct a five-variable map. It is easy to imagine a structure where one map is directly behind the other, and they are distinguished by x_5 = 0 for one map and x_5  = 1 for the other map. Since such a structure is awkward to draw, we can simply place the two maps side by side as shown in Figure

 

Fig. A five-variable Karnaugh map.

2. For the logic function given in this example, two groups of four 1s appear in the same place in both four-variable maps; hence their realization does not depend on the value of x5. The same is true for the two groups of two 1s in the second row.

3. The 1 in the top-right corner appears only in the right map, where x5 = 1; it is part of the group of two 1s realized by the term x1x2'x3'x5.

4. Using a five-variable map is obviously more awkward than using maps with fewer variables.

5.  Extending the Karnaugh map concept to more variables is not useful from the practical point of view.

Strategy for Minimization:

1. To minimize a logic function, we need an approach for deciding how the 1s in a Karnaugh map should be grouped together to obtain the minimum-cost implementation of the function.

2. Our strategy is to find as few groups of 1s as possible, each as large as possible, that together cover all cases where the function has a value of 1.

3. Each group of 1s has to comprise cells that can be represented by a single product term. The larger the group of 1s, the fewer the number of variables in the corresponding product term.

Terminology:

To facilitate the presentation of the results, certain terminology has evolved that avoids the need for using highly descriptive phrases. We define some of this terminology in the following paragraphs because it is useful for describing the minimization process.

Literal:

A given product term consists of some number of variables, each of which may appear either in uncomplemented or complemented form. Each appearance of a variable, either uncomplemented or complemented, is called a literal. For example, the product term x1x2'x3 has three literals, and the term x1'x3x4'x6 has four literals.

Implicant

1. A product term that indicates the input valuation(s) for which a given function is equal to 1 is called an implicant of the function. The most basic implicants are the minterms. For an n-variable function, a minterm is an implicant that consists of n literals.

2. Consider the three-variable function in the figure. There are 11 implicants for this function. This includes the five minterms: x1'x2'x3', x1'x2'x3, x1'x2x3', x1'x2x3, and x1x2x3.

 

Fig. Three-variable function f(x1, x2, x3) = Σm(0, 1, 2, 3, 7).

3. Then there are the implicants that correspond to all possible pairs of minterms that can be combined, namely, x1'x2' (m0 and m1), x1'x3' (m0 and m2), x1'x3 (m1 and m3), x1'x2 (m2 and m3), and x2x3 (m3 and m7). Finally, there is one implicant that covers a group of four minterms, which consists of the single literal x1'.

Prime Implicant:

1. An implicant is called a prime implicant if it cannot be combined into another implicant that has fewer literals. Another way of stating this definition is to say that it is impossible to delete any literal in a prime implicant and still have a valid implicant.

2. In the above figure there are two prime implicants: x1' and x2x3. It is not possible to delete a literal in either of them.

3. Another way of thinking about prime implicants is that they represent “the largest groups of 1s” that can be circled in the Karnaugh map.
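The definitions above can be checked by brute force. The following sketch (Python used for illustration; the function names are our own) enumerates all implicants of the three-variable function f = Σm(0, 1, 2, 3, 7) as cubes over {0, 1, -}, then keeps only those from which no literal can be deleted:

```python
from itertools import product

def implicants(minterms, n):
    """Enumerate all implicants of an n-variable function as cubes.

    A cube is a tuple over {'0', '1', '-'}; '-' means the variable is absent.
    A cube is an implicant if every minterm it covers is in the function."""
    on_set = set(minterms)
    result = []
    for cube in product("01-", repeat=n):
        covered = [m for m in range(2 ** n)
                   if all(c == '-' or c == format(m, f'0{n}b')[i]
                          for i, c in enumerate(cube))]
        if covered and all(m in on_set for m in covered):
            result.append(cube)
    return result

def prime_implicants(minterms, n):
    """An implicant is prime if deleting any literal breaks it."""
    imps = set(implicants(minterms, n))
    primes = []
    for cube in imps:
        shrunk = [cube[:i] + ('-',) + cube[i + 1:]
                  for i, c in enumerate(cube) if c != '-']
        if not any(s in imps for s in shrunk):
            primes.append(cube)
    return primes

# f(x1, x2, x3) = Σm(0, 1, 2, 3, 7): 11 implicants, 2 prime implicants
print(len(implicants([0, 1, 2, 3, 7], 3)))          # 11
print(sorted(prime_implicants([0, 1, 2, 3, 7], 3)))
```

The two cubes returned are ('0', '-', '-') and ('-', '1', '1'), i.e. x1' and x2x3, matching the figure.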

Cover

A collection of implicants that account for all valuations for which a given function is equal to 1 is called a cover of that function.

Cost

Cost of a logic circuit is the number of gates plus the total number of inputs to all gates in the circuit.

Essential Prime Implicant:

If a prime implicant includes a minterm for which f = 1 that is not included in any other prime implicant, then it must be included in the cover and is called an essential prime implicant. In the example in above Figure both prime implicants are essential.

Incompletely Specified Functions:

1. In digital systems it often happens that certain input conditions can never occur. For example, suppose that x1 and x2 control two interlocked switches such that both switches cannot be closed at the same time.

2. Thus the only three possible states of the switches are that both switches are open or that one switch is open and the other switch is closed.

3. Namely, the input valuations (x1, x2) = 00, 01, and 10 are possible, but 11 is guaranteed not to occur. Then we say that (x1, x2) = 11 is a don't-care condition, meaning that a circuit with x1 and x2 as inputs can be designed by ignoring this condition.

4. Don’t-care conditions, or don’t-cares for short, can be used to advantage in the design of logic circuits. Since these input valuations will never occur, the designer may assume that the function value for these valuations is either 1 or 0, whichever is more useful in trying to find a minimum-cost implementation.

5. The figure below illustrates this idea. The required function has a value of 1 for minterms m2, m4, m5, m6, and m10. Assuming the above-mentioned interlocked switches, the x1 and x2 inputs will never be equal to 1 at the same time; hence the minterms m12, m13, m14, and m15 can all be used as don't-cares. The don't-cares are denoted by the letter d in the map. Using the shorthand notation, the function f is specified as

f(x1, . . . , x4) = Σm(2, 4, 5, 6, 10) + D(12, 13, 14, 15)

          where D is the set of don't-cares.

 

Fig. Two implementations of the function f(x1, . . . , x4) = Σm(2, 4, 5, 6, 10) + D(12, 13, 14, 15).

6. Part (a) of the figure indicates the best sum-of-products implementation. To form the largest possible groups of 1s, thus generating the lowest-cost prime implicants, it is necessary to assume that the don't-cares D12, D13, and D14 (corresponding to minterms m12, m13, and m14) have the value 1 while D15 has the value 0.

7. Then there are only two prime implicants, which provide a complete cover of f . The resulting implementation is

f = x2x3' + x3x4'

8. Part (b) shows how the best product-of-sums implementation can be obtained. The same values are assumed for the don’t cares. The result is

f = (x2 + x3)(x3' + x4')

9. The freedom in choosing the value of don’t-cares leads to greatly simplified realizations. If we were to naively exclude the don’t-cares from the synthesis of the function, by assuming that they always have a value of 0, the resulting SOP expression would be

f = x1'x2x3' + x1'x3x4' + x2'x3x4'

and the POS expression would be

f = (x2 + x3)(x3' + x4')(x1 + x2')

10. Both of these expressions have higher costs than the expressions obtained with a more appropriate assignment of values to don’t-cares.

11. Although don’t-care values can be assigned arbitrarily, an arbitrary assignment may not lead to a minimum-cost implementation of a given function. 

12. If there are k don't-cares, then there are 2^k possible ways of assigning a 0 or 1 value to them. In the Karnaugh map we can usually see how best to do this assignment to find the simplest implementation.
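As a sanity check (a sketch of our own, not from the text), the two minimum-cost expressions derived above can be verified against the care minterms; on the don't-cares the implementations are free to differ from 0:

```python
# Care minterms and don't-cares from the example
care = {2, 4, 5, 6, 10}
dont_care = {12, 13, 14, 15}

def bits(m):
    """Return (x1, x2, x3, x4) for minterm m of a four-variable function."""
    return tuple((m >> s) & 1 for s in (3, 2, 1, 0))

def f_sop(m):
    # f = x2 x3' + x3 x4'  (don't-cares 12, 13, 14 taken as 1; 15 as 0)
    x1, x2, x3, x4 = bits(m)
    return (x2 and not x3) or (x3 and not x4)

def f_pos(m):
    # f = (x2 + x3)(x3' + x4')
    x1, x2, x3, x4 = bits(m)
    return (x2 or x3) and ((not x3) or (not x4))

for m in range(16):
    if m in dont_care:
        continue  # the function value is free on don't-cares
    assert bool(f_sop(m)) == (m in care)
    assert bool(f_pos(m)) == (m in care)
print("both implementations match on all care valuations")
```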

1.10. MINIMIZATION USING QUINE-McCLUSKEY (TABULAR) METHOD:

The K-map method is suitable for simplification of Boolean functions of up to 5 or 6 variables. As the number of variables increases beyond this, the visualization of adjacent squares becomes difficult because the geometry is more involved. The Quine-McCluskey, or tabular, method is employed in such cases. It is a systematic step-by-step procedure for minimizing a Boolean expression in standard form.

Procedure for Finding the Minimal Expression:

Arrange all minterms in groups such that all terms in the same group have the same number of 1s in their binary representation. Start with the least number of 1s and continue with groups of increasing numbers of 1s. The number of 1s in a term is called the index of that term; all minterms of the same index are placed in the same group. The lowest value of the index is zero. Separate each group with a thick line. This constitutes stage I.

Compare every term of the lowest-index group (say, index i) with each term in the successive group (index i + 1). If two minterms differ in only one variable, that variable is removed and a dash (–) is placed in its position, forming a new term with one less literal. When this occurs, a check mark (✔) is placed next to both minterms. After all pairs of terms with indices i and i + 1 have been considered, a thick line is drawn under the last term. When this process has been repeated for all the groups of stage I, one stage of elimination has been completed. This constitutes stage II.

Stage III repeats the elimination on the newly formed groups of the second stage. In this stage, two terms can be compared only when they have dashes in the same positions. The process continues to higher stages until no further comparisons are possible (i.e., no further elimination of literals).

All terms which remain unchecked (No ✔ sign) during the process are considered to be prime implicants (PIs). Thus, a set of all PIs of the function is obtained.

From the set of all prime implicants, a set of essential prime implicants (EPIs) must be determined by preparing a prime implicant chart, as follows.

The PIs should be represented in rows and each minterm of the function in a column.

Crosses should be placed in each row corresponding to the minterms that make up that PI.

The completed PI chart should be inspected for columns containing only a single cross. A PI that covers a minterm whose column has a single cross is an EPI.

The minterms which are not covered by the EPIs are taken into consideration and a minimum cover is obtained from the remaining PIs.

If don't-care conditions are given, they are also used to find the prime implicants, but it is not compulsory to include them in the final simplified expression.
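The procedure above can be sketched in code. This is a simplified illustration (Python, our own function names): instead of grouping terms by index, it compares all pairs, which finds the same combinable terms; and instead of a full PI-chart analysis it selects covering PIs greedily, which works for small examples like the ones below but is not guaranteed minimal in general:

```python
from itertools import combinations

def combine(a, b):
    """Combine two terms (strings over '0', '1', '-') that differ in exactly
    one position; dashes must line up for the terms to be combinable."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def covers(term, m, n):
    mb = format(m, f'0{n}b')
    return all(t == '-' or t == c for t, c in zip(term, mb))

def quine_mccluskey(minterms, n, dont_cares=()):
    """Find all prime implicants, then greedily cover the care minterms."""
    terms = {format(m, f'0{n}b') for m in list(minterms) + list(dont_cares)}
    primes = set()
    while terms:
        used, next_terms = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                next_terms.add(c)
                used.update((a, b))
        primes |= terms - used          # unchecked terms are prime implicants
        terms = next_terms
    cover, remaining = [], set(minterms)   # PI chart step (greedy selection)
    while remaining:
        best = max(sorted(primes),
                   key=lambda p: sum(covers(p, m, n) for m in remaining))
        cover.append(best)
        remaining -= {m for m in remaining if covers(best, m, n)}
    return sorted(primes), sorted(cover)

primes, cover = quine_mccluskey([0, 2, 3, 6, 7, 8, 10, 12, 13], 4)
print(cover)   # ['-0-0', '0-1-', '110-']  i.e.  B'D' + A'C + ABC'
```

Don't-care minterms are passed through `dont_cares`: they participate in forming prime implicants but are not placed in the set that must be covered.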

Example 1. Simplify the given function using tabular method.

                                F (A, B, C, D) = ∑ m(0, 2, 3, 6, 7, 8, 10, 12, 13)

Solution. 

1. The minterms of the function are represented in binary form. The binary representations are grouped into a number of sections according to the number of 1s (the index), as shown in the table.

 

2. Compare each binary term with every term in the adjacent next-higher group. If they differ in only one position, put a check mark and copy the term into the next column with a dash (–) in the place where the variable is unmatched, as shown in the next table.

 

3. Apply the same process to the resultant column of the table and continue until no further elimination of literals is possible. This is shown in the table.

 

4. All terms which remain unchecked are the PIs. Note, however, that the minterm combinations (0, 2) and (8, 10) form the same combination (0, 2, 8, 10) as the combinations (0, 8) and (2, 10). The order in which these combinations are placed has no effect. Moreover, since x + x = x, we can eliminate one of the duplicates. The same occurs with the combinations (2, 3) and (6, 7).

5. Now we prepare a PI chart to determine the EPIs, as shown in the table.

 

All the PIs are represented in rows and each minterm of the function in a column.

Crosses are placed in each row to show the minterms that make up each PI.

For each column that contains a single cross, the PI corresponding to the row in which that cross appears is an essential prime implicant. A tick mark is put against each column which has only one cross mark, and a star (*) is placed against each EPI.

6. All the minterms have been covered by EPIs.

Finally, the sum of all the EPIs gives the function in its minimal SOP form

 

Therefore,                                      F = ABC' + B'D' + A'C.
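The result can be checked by brute force. This sketch (Python used for illustration; `F` is our own name, with A as the most-significant bit) evaluates the minimal SOP over all 16 input combinations and recovers exactly the minterm list we started from:

```python
def F(m):
    """Evaluate F = ABC' + B'D' + A'C at minterm m (A is the MSB)."""
    A, B, C, D = ((m >> s) & 1 for s in (3, 2, 1, 0))
    return (A and B and not C) or (not B and not D) or (not A and C)

on_set = [m for m in range(16) if F(m)]
print(on_set)   # [0, 2, 3, 6, 7, 8, 10, 12, 13]
```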

Example 2. Simplify the given function using tabular method.

                                 F(A, B, C, D) = ∑ (0, 2, 3, 6,7)+d (5, 8, 10, 11, 15)

Solution: 

1. The minterms of the function are represented in binary form. The binary representations are grouped into a number of sections according to the number of 1s (the index), as shown in the table. The don't-care minterms are also included.

 

2. Compare each binary term with every term in the adjacent next-higher group. If they differ in only one position, put a check mark and copy the term into the next column with a dash (–) in the place where the variable is unmatched, as shown in the table.


 

3. Apply the same process to the resultant column of the table and continue until no further elimination of literals is possible. This is shown in the table.

 

4. All the terms which remain unchecked are PIs. Moreover, one of each pair of duplicate combinations is eliminated.

5. Prepare a PI chart to determine the EPIs, as shown in the table.

Note, however, that don't-care minterms are not listed as column headings in the chart, as they do not have to be covered by the minimal (simplified) expression.

 

6. All the minterms have been covered by EPIs.

        Therefore    F (A, B, C, D) = B'D' + A'C

















Monday, October 12, 2020

 

UNIT-II

2.1 NUMBER REPRESENTATION:

2.1.1. Positional Number Representation:

Unsigned Integers:

The simplest numbers to consider are the integers. We will begin by considering positive integers and then expand the discussion to include negative integers. Numbers that are positive only are called unsigned, and numbers that can also be negative are called signed.

            An n-bit unsigned binary number B = bn−1  bn−2 · · · b1b0

            represents an integer that has the value

V(B) = bn−1 × 2^(n−1) + bn−2 × 2^(n−2) + · · · + b1 × 2^1 + b0 × 2^0

Octal and Hexadecimal Representations:

1. The positional number representation can be used for any radix. If the radix is r, then the number

K = k n−1 k n−2 · · · k 1 k 0

            has the value

V(K) = kn−1 × r^(n−1) + kn−2 × r^(n−2) + · · · + k1 × r^1 + k0 × r^0

2. Our interest is limited to those radices that are most practical. We will use decimal numbers because they are used by people, and we will use binary numbers because they are used by computers.

3. In addition, two other radices are useful—8 and 16. Numbers represented with radix 8 are called octal numbers, while radix-16 numbers are called hexadecimal numbers.

4. In octal representation the digit values range from 0 to 7. In hexadecimal representation (often abbreviated as hex), each digit can have one of 16 values. The first ten are denoted the same as in the decimal system, namely, 0 to 9. Digits that correspond to the decimal values 10, 11, 12, 13, 14, and 15 are denoted by the letters, A, B, C, D, E, and F.

5. In computers the dominant number system is binary. The reason for using the octal and hexadecimal systems is that they serve as a useful shorthand notation for binary numbers.

6. Below table gives the first 18 integers in these number systems.

          

 

Table. Numbers in different systems.

7. One octal digit represents three binary bits. Thus a binary number is converted into an octal number by taking groups of three bits, starting from the least-significant bit, and replacing them with the corresponding octal digit. For example, 101011010111 is converted as

                                                 101         011          010            111

                                                 5              3               2                 7                                        

8. Which means that (101011010111)2 = (5327)8.

9. If the number of bits is not a multiple of three, then we add 0s to the left of the most-significant bit. For example, (10111011)2 = (273)8 because of the grouping 010 111 011.

10. Conversion from octal to binary is just as straightforward; each octal digit is simply replaced by three bits that denote the same value.

11. Similarly, a hexadecimal digit represents four bits. For example, a 16-bit number is represented by four hex digits, as in

(1010111100100101)2 = (AF25)16

using the grouping 1010 1111 0010 0101.

12. Zeros are added to the left of the most-significant bit if the number of bits is not a multiple of four. For example, (1101101000)2 = (368)16 because of the grouping 0011 0110 1000.

13. Conversion from hexadecimal to binary involves straightforward substitution of each hex digit by four bits that denote the same value.

14. Binary numbers used in modern computers often have 32 or 64 bits. Written as binary n-tuples (sometimes called bit vectors), such numbers are awkward for people to deal with. It is much simpler to deal with them in the form of 8- or 16-digit hex numbers.
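The grouping procedure above can be sketched in a few lines of Python (`to_base_groups` is an illustrative helper of our own; note that hex digit characters also serve for 3-bit octal groups, since those values never exceed 7):

```python
def to_base_groups(bits, group):
    """Convert a binary string by grouping bits from the least-significant
    end; 0s are added to the left of the MSB, as described in the text."""
    pad = (-len(bits)) % group
    bits = "0" * pad + bits
    digits = [bits[i:i + group] for i in range(0, len(bits), group)]
    return "".join(format(int(d, 2), "X") for d in digits)

print(to_base_groups("101011010111", 3))      # 5327  (octal)
print(to_base_groups("10111011", 3))          # 273   (octal)
print(to_base_groups("1010111100100101", 4))  # AF25  (hexadecimal)
print(to_base_groups("1101101000", 4))        # 368   (hexadecimal)
```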

2.1.2. Addition of Unsigned Numbers:

1. Binary addition is performed in the same way as decimal addition except that the values of  individual digits can be only 0 or 1.

2. The one-bit addition entails four possible combinations, as indicated in Figure a. Two bits are needed to represent the result of the addition. The right-most bit is called the sum, s. The left-most bit, which is produced as a carry-out when both bits being added are equal to 1, is called the carry, c.

Fig. One bit addition

3. The addition operation is defined in the form of a truth table in part (b) of the figure

4. A more interesting case is when larger numbers that have multiple bits are involved. Then it is still necessary to add each pair of bits, but for each bit position i, the addition operation may include a carry-in from bit position i − 1.

5. Below figure .a presents an example of the addition operation. The two operands are X = (01111)2 = (15)10 and Y = (01010)2 = (10)10.

6. Five bits are used to represent X and Y, making it possible to represent integers in the range from 0 to 31; hence the sum S = X + Y = (25)10 can also be denoted as a five-bit integer.

7. Note the labeling of individual bits, such that X = x4 x3 x2 x1 x0 and Y = y4 y3 y2 y1 y0. The figure shows, in a pink color, the carries generated during the addition process. For example, a carry of 0 is generated when x0 and y0 are added; a carry of 1 is produced when x1 and y1 are added, and so on.

Fig. Addition of multibit numbers

8. For bit position 0, there is no carry-in. For each other bit position i, the addition involves bits xi and yi, and a carry-in ci, as illustrated in Figure b. This observation leads to the design of a logic circuit that has three inputs xi, yi, and ci, and produces the two outputs si and ci+1.
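The bit-by-bit addition with carries described above can be sketched as follows (Python used for illustration; bit lists are MSB-first, matching the figure's labeling x4 x3 x2 x1 x0):

```python
def add_unsigned(x_bits, y_bits):
    """Add two equal-length bit lists (MSB first), recording the carry
    produced at each bit position, starting from position 0."""
    n = len(x_bits)
    s, c = [0] * n, 0
    carries = []
    for i in range(n - 1, -1, -1):      # position 0 is the right-most bit
        total = x_bits[i] + y_bits[i] + c
        s[i], c = total % 2, total // 2
        carries.append(c)
    return s, carries

# X = (01111)2 = 15 and Y = (01010)2 = 10, as in the example
s, carries = add_unsigned([0, 1, 1, 1, 1], [0, 1, 0, 1, 0])
print(s)        # [1, 1, 0, 0, 1]  i.e. (11001)2 = 25
print(carries)  # carry at position 0 is 0, at position 1 is 1, and so on
```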

2.1.3. Signed Numbers:

1. In the decimal system the sign of a number is indicated by a + or − symbol to the left of the most-significant digit. In the binary system the sign of a number is denoted by the  left-most bit. For a positive number the left-most bit is equal to 0, and for a negative number it is equal to 1.

2. Therefore, in signed numbers the left-most bit represents the sign, and the remaining n − 1 bits represent the magnitude.

3. In unsigned numbers all bits represent the magnitude of a number; hence all n bits are significant in defining the magnitude. Therefore, the MSB is the left-most bit, bn−1. In signed numbers there are n − 1 significant bits, and the MSB is in bit position bn−2.

        

Fig. Formats for representation of integers

Negative Numbers:

Negative numbers can be represented in three different ways:

1. Sign-and-magnitude

2. 1’s complement

3.  2’s complement

1. Sign-and-Magnitude Representation:

 1. In the familiar decimal representation, the magnitude of both positive and negative   numbers is expressed in the same way. The sign symbol distinguishes a number as  being positive or negative. This scheme is called the sign-and-magnitude number representation.

2. The same scheme can be used with binary numbers in which case the sign bit is 0 or 1 for positive or negative numbers, respectively. For example, if we use four-bit numbers, then +5 = 0101 and −5 = 1101.

3.  This representation is not well suited for use in computers. More suitable representations are based on complementary systems.

1’s Complement Representation :

1. In a complementary number system, the negative numbers are defined according to a subtraction operation involving positive numbers. We will consider two schemes for binary numbers: the 1’s complement and the 2’s complement.

2. In the 1’s complement scheme, an n-bit negative number, K, is obtained by subtracting its equivalent positive number, P, from 2^n − 1; that is, K = (2^n − 1) − P.

3. For example, if n = 4

            Then                 K = (2^4 − 1) − P

                                         = (15)10 − P = (1111)2 − P

3. If we convert +5 to a negative

            We get               −5 = 1111 − 0101 = 1010

            Similarly             +3 = 0011

            And                  −3 = 1111 − 0011 = 1100

4. Clearly, the 1’s complement can be obtained simply by complementing each bit of the number, including the sign bit.

5. While 1’s complement numbers are easy to derive, they have some drawbacks when used in arithmetic operations, as we will see in the next section.

2’s Complement Representation

1. In the 2’s complement scheme, a negative number, K, is obtained by subtracting its equivalent positive number, P, from 2^n; namely, K = 2^n − P.

            Using our four-bit example

                                                 −5 = 10000 − 0101 = 1011

             And                              −3 = 10000 − 0011 = 1101

2. Finding 2’s complements in this manner requires performing a subtraction operation that involves borrows.

3. However, we can observe that if K1 is the 1’s complement of P and K2 is the 2’s complement of P, then

                                                K1 = (2^n − 1) − P

                                                K2 = 2^n − P

4. It follows that K2 = K1 + 1. Thus a simpler way of finding a 2’s complement of a number is to add 1 to its 1’s complement because finding a 1’s complement is trivial.

5. This is how 2’s complement numbers are obtained in logic circuits that perform arithmetic operations. There is a simple rule that can be used for this purpose.

Rule for Finding 2’s Complements

1. Given a number B = bn−1 bn−2 · · · b1b0, its 2’s complement, K = kn−1 kn−2 · · · k1 k0, can be found by examining the bits of B from right to left and taking the following action: copy all bits of B that are 0 and the first bit that is 1; then simply complement the rest of the bits.

2. For example, if B = 0110, then we copy k0 = b0 = 0 and k1 = b1 = 1, and complement the rest so that k2 = b2' = 0 and k3 = b3' = 1. Hence K = 1010.

3. As another example, if B = 10110100, then we copy k0 = b0 = 0, k1 = b1 = 0, and k2 = b2 = 1, and complement the rest so that k3 = b3' = 1, k4 = b4' = 0, k5 = b5' = 0, k6 = b6' = 1, and k7 = b7' = 0. Then K = 01001100.
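The rule can be sketched directly (a Python illustration with names of our own; the cross-check confirms it agrees with the definition K = 2^n − P):

```python
def twos_complement(bits):
    """Apply the rule: scanning right to left, copy all 0s and the first 1,
    then complement the remaining bits. `bits` is an MSB-first string."""
    out = list(bits)
    i = len(bits) - 1
    while i >= 0 and bits[i] == "0":    # copy the trailing 0s
        i -= 1
    i -= 1                              # copy the first 1 as well
    while i >= 0:                       # complement everything to the left
        out[i] = "1" if bits[i] == "0" else "0"
        i -= 1
    return "".join(out)

print(twos_complement("0110"))       # 1010
print(twos_complement("10110100"))   # 01001100

# Cross-check against the definition K = 2^n - P
for b in ("0110", "10110100", "0101"):
    n, p = len(b), int(b, 2)
    assert int(twos_complement(b), 2) == (2 ** n - p) % (2 ** n)
```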

4. Below table illustrates the interpretation of all 16 four-bit patterns in the three signed number representations that we have considered.

Table. Interpretation of four-bit signed integers.

5. Note that for both sign-and-magnitude representation and for 1’s complement representation there are two patterns that represent the value zero.

6. For 2’s complement there is only one such pattern. Also, observe that the range of numbers that can be represented with four bits in 2’s complement form is −8 to +7, while in the other two representations it is −7 to +7.

7. Using 2’s-complement representation, an n-bit number B = bn−1 bn−2 · · · b1b0 represents the value

             V(B) = −bn−1 × 2^(n−1) + bn−2 × 2^(n−2) + · · · + b1 × 2^1 + b0 × 2^0

8. Thus the largest negative number, 100 . . . 00, has the value −2^(n−1). The largest positive number, 011 . . . 11, has the value 2^(n−1) − 1.
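The value formula, with its negatively weighted sign bit, can be checked with a short sketch (Python, illustrative naming):

```python
def value_2c(bits):
    """V(B) = -b(n-1)*2^(n-1) + b(n-2)*2^(n-2) + ... + b0*2^0 (MSB first)."""
    n = len(bits)
    v = -int(bits[0]) * 2 ** (n - 1)
    v += sum(int(b) * 2 ** (n - 2 - i) for i, b in enumerate(bits[1:]))
    return v

print(value_2c("1000"))   # -8, the largest negative four-bit number
print(value_2c("0111"))   # 7, the largest positive four-bit number
print(value_2c("1011"))   # -5
```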

2.2. ADDITION AND SUBTRACTION:

1. To assess the suitability of different number representations, it is necessary to investigate their use in arithmetic operations—particularly in addition and subtraction.

2. We can illustrate the good and bad aspects of each representation by considering very small numbers. We will use four-bit numbers, consisting of a sign bit and three significant bits.

3. Addition of positive numbers is the same for all three number representations. It is actually the same as the addition of unsigned numbers.

4. But there are significant differences when negative numbers are involved. The difficulties that arise become apparent if we consider operands with different combinations of signs.

Sign-and-Magnitude Addition

1. If both operands have the same sign, then the addition of sign-and-magnitude numbers is simple. The magnitudes are added, and the resulting sum is given the sign of the operands.

2. However, if the operands have opposite signs, the task becomes more complicated. Then it is necessary to subtract the smaller number from the larger one.

3. This means that logic circuits that compare and subtract numbers are also needed. We will see shortly that it is possible to perform subtraction without the need for this circuitry. For this reason, the sign-and-magnitude representation is not used in computers.

1’s Complement Addition

1. An obvious advantage of the 1’s complement representation is that a negative number is generated simply by complementing all bits of the corresponding positive number.

2. Below figure shows what happens when two numbers are added. There are four cases to consider in terms of different combinations of signs. As seen in the top half of the figure, the computation of 5 + 2 = 7 and (−5) + 2 = (−3) is straightforward; a simple addition of the operands gives the correct result. Such is not the case with the other two possibilities.

3. Computing 5 + (−2) = 3 produces the bit vector 10010. Because we are dealing with four-bit numbers, there is a carry-out from the sign-bit position. Also, the four bits of the result represent the number 2 rather than 3, which is a wrong result.

4. Interestingly, if we take the carry-out from the sign-bit position and add it to the result in the least-significant bit position, the new result is the correct sum of 3.

Fig. Examples of 1’s complement addition

5. A similar situation arises when adding (−5) + (−2) = (−7). After the initial addition the result is wrong because the four bits of the sum are 0111, which represents +7 rather than −7. But again, there is a carry-out from the sign-bit position, which can be used to correct the result by adding it in the LSB position

6. The conclusion from these examples is that the addition of 1’s complement numbers may or may not be simple. In some cases a correction is needed, which amounts to an extra addition that must be performed.
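The end-around carry correction can be sketched as follows (a four-bit Python illustration; `enc_1c` and `add_1c` are names of our own):

```python
def enc_1c(v, n=4):
    """Encode a signed value in n-bit 1's complement: K = (2^n - 1) - |v|."""
    return v if v >= 0 else (1 << n) - 1 + v

def add_1c(a, b, n=4):
    """Add two n-bit 1's-complement numbers, adding any carry-out from the
    sign-bit position back into the LSB (the end-around carry)."""
    total = a + b
    carry_out = total >> n
    return (total + carry_out) & ((1 << n) - 1)

# The four cases from the figure
print(format(add_1c(enc_1c(5), enc_1c(2)), "04b"))    # 0111  (+7)
print(format(add_1c(enc_1c(-5), enc_1c(2)), "04b"))   # 1100  (-3)
print(format(add_1c(enc_1c(5), enc_1c(-2)), "04b"))   # 0011  (+3, corrected)
print(format(add_1c(enc_1c(-5), enc_1c(-2)), "04b"))  # 1000  (-7, corrected)
```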

2’s Complement Addition

1. Consider the same combinations of numbers as used in the 1’s complement example. Below figure indicates how the addition is performed using 2’s complement numbers.

 

Fig. Examples of 2’s complement addition.

2. Adding 5 + 2 = 7 and (−5) + 2 = (−3) is straightforward. The computation 5 + (−2) = 3 generates the correct four bits of the result, namely 0011. There is a carry-out from the sign-bit position, which we can simply ignore.

 3. The fourth case is (−5) + (−2) = (−7). Again, the four bits of the result, 1001, give the correct sum (−7). In this case also, the carry-out from the sign-bit position can be ignored.

4. As illustrated by these examples, the addition of 2’s complement numbers is very simple. When the numbers are added, the result is always correct. If there is a carry-out from the sign-bit position, it is simply ignored.

5. Therefore, the addition process is the same, regardless of the signs of the operands. It can be performed by an adder circuit.

6. Hence the 2’s complement notation is highly suitable for the implementation of addition operations.

2’s Complement Subtraction

1. The easiest way of performing subtraction is to negate the subtrahend and add it to the minuend. This is done by finding the 2’s complement of the subtrahend and then performing the addition. Below figure illustrates the process.

Fig. Examples of 2’s complement subtraction.

        

2. The operation 5 − (+2) = 3 involves finding the 2’s complement of +2, which is 1110. When this number is added to 0101, the result is 0011 = (+3) and a carry-out from the sign-bit position occurs, which is ignored. A similar situation arises for (−5) − (+2) = (−7). In the remaining two cases there is no carry-out, and the result is correct.

3. Subtraction operation can be realized as the addition operation, using a 2’s complement of the subtrahend, regardless of the signs of the two operands. Therefore, it should be possible to use the same adder circuit to perform both addition and subtraction.
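This shared adder can be sketched as follows (Python, names of our own): for subtraction, the subtrahend's bits are complemented and the carry-in is set to 1, which adds its 2's complement; the carry-out from the sign-bit position is ignored in both operations.

```python
def to_2c(v, n=4):
    """Encode a signed value as an n-bit 2's-complement bit pattern."""
    return v & ((1 << n) - 1)

def add_sub(x, y, sub=False, n=4):
    """One n-bit adder for both operations: for subtraction, complement y
    and set the carry-in to 1. The final mask discards the carry-out."""
    if sub:
        y = ~y & ((1 << n) - 1)
    return (x + y + (1 if sub else 0)) & ((1 << n) - 1)

print(format(add_sub(to_2c(5), to_2c(2), sub=True), "04b"))   # 0011  (+3)
print(format(add_sub(to_2c(-5), to_2c(2), sub=True), "04b"))  # 1001  (-7)
print(format(add_sub(to_2c(-5), to_2c(-2)), "04b"))           # 1001  (-7)
```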

2.3. COMBINATIONAL CIRCUIT BUILDING BLOCKS:

Combinational Logic Design:

1. Logic circuits for digital systems may be combinational or sequential. The output of a combinational circuit depends on its present inputs only.

2. The processing performed by a combinational circuit is fully specified by a set of Boolean functions. A combinational circuit consists of input variables, logic gates, and output variables.

3. Both input and output data are represented by signals, i.e., they exist in two possible values. One is logic 1 and the other logic 0.

 


4. For n input variables, there are 2^n possible combinations of binary input values. For each possible input combination, there is one and only one possible output combination.

5. A combinational circuit can be described by m Boolean functions, one for each output variable.

6. Design Procedure:

   The problem is stated

         1.         The number of available input variables and required output variables is determined.

         2.         The input and output variables are assigned letter symbols.

         3.         The truth table that defines the required relationship between inputs and outputs is  derived.

         4.         The simplified Boolean function for each output is obtained.

         5.         The logic diagram is drawn.

2.3.1. Adders and Subtractors:

Adders:

Half Adder:

1. This circuit, which implements the addition of only two bits, is called a half-adder.

2. Consider the addition of 2 one-bit numbers, as an example in the context of general adder circuits.

3. The one-bit addition entails four possible combinations, as indicated in Figure a shown below .Two bits are needed to represent the result of the addition.

4. The right-most bit is called the sum, s. The left-most bit, which is produced as a carry-out when both bits being added are equal to 1, is called the carry, c. The addition operation is defined in the form of a truth table in part (b) of the figure.

Fig. Half-adder.

5. The sum bit s is the XOR function. The carry c is the AND function of inputs x and y. A circuit realization of these functions is shown in Figure c.

6. A more interesting case is when larger numbers that have multiple bits are involved. Then it is still necessary to add each pair of bits, but for each bit position i, the addition operation may include a carry-in from bit position i − 1.Figure 3.2a presents an example of the addition operation.

 

Fig. Addition of multibit numbers.

7. The two operands are X = (01111)2 = (15)10 and Y = (01010)2 = (10)10. Five bits are used to represent X and Y, making it possible to represent integers in the range from 0 to 31; hence the sum S = X + Y = (25)10 can also be denoted as a five-bit integer.

8. Note the labeling of individual bits, such that X = x4x3x2x1x0 and Y = y4y3y2y1y0. The figure shows, in a pink color, the carries generated during the addition process. For example, a carry of 0 is generated when x0 and y0 are added; a carry of 1 is produced when x1 and y1 are added, and so on.

9. For bit position 0, there is no carry-in, and hence the addition is the same as in Figure a for the half-adder. For each other bit position i, the addition involves bits xi and yi, and a carry-in ci, as illustrated in Figure b above. This observation leads to the design of a logic circuit that has three inputs xi, yi, and ci, and produces the two outputs si and ci+1. This is achieved by the full-adder circuit.
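As a quick sanity check of the half-adder functions above, this sketch (Python, names of our own) reproduces the truth table from part (b) of the figure:

```python
def half_adder(x, y):
    """Half-adder: sum s = x XOR y, carry c = x AND y."""
    return x ^ y, x & y

# Enumerate all four input combinations
for x in (0, 1):
    for y in (0, 1):
        s, c = half_adder(x, y)
        print(f"x={x} y={y} -> c={c} s={s}")   # c=1 s=0 only when x=y=1
```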


NAND Logic:

 

 


NOR Logic:

 

Full Adder:

1. This circuit, which implements the addition of three bits, is called a full-adder.

2. The full adder has 3 inputs and 2 outputs, as shown in the block diagram.

Fig. Full adder block diagram

3. The full-adder truth table, the realization of its outputs using K-maps, and the logic circuit implementation are shown in the figure below.

Fig. Full adder.

4. For the carry-out function the optimal sum-of-products realization is

                        ci+1 = xiyi + xici + yici

5. For the si function a sum-of-products realization is

              si = xi'yi'ci + xi'yici' + xiyi'ci' + xiyici

  This function can be implemented using XOR gates, as explained below.

1. The XOR function of two variables is defined as x1 ⊕ x2 = x1'x2 + x1x2'. The preceding expression for the sum bit can be manipulated into a form that uses only XOR operations as follows:

           si = (xi'yi + xiyi')ci' + (xi'yi' + xiyi)ci

              = (xi ⊕ yi)ci' + (xi ʘ yi)ci

              = (xi ⊕ yi)ci' + (xi ⊕ yi)'ci        (⸫ x ʘ y = (x ⊕ y)')

              = (xi ⊕ yi) ⊕ ci

2. The XOR operation is associative; hence we can write

           si = xi ⊕ yi ⊕ ci

3. Therefore, a three-input XOR gate can be used to realize si.

Fig. A decomposed implementation of the full-adder circuit.

4. In view of the names used for the circuits, one can expect that a full-adder can be constructed using half-adders. This can be accomplished by creating a multilevel circuit given in above Figure. It uses two half-adders to form a full-adder.
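The decomposition into two half-adders can be sketched as follows (an illustrative behavioral model, with our own function names):

```python
def half_adder(x, y):
    # sum = XOR, carry = AND
    return x ^ y, x & y

def full_adder(x, y, c_in):
    """Full-adder from two half-adders and one OR gate, as in the figure."""
    s1, c1 = half_adder(x, y)      # first half-adder adds x and y
    s, c2 = half_adder(s1, c_in)   # second half-adder adds the carry-in
    c_out = c1 | c2                # the two carries are never both 1
    return s, c_out
```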

AOI Logic:

 

NAND Logic:

NOR logic :

Subtractors:

1. The subtraction of two binary numbers may be accomplished by taking the complement of the subtrahend and adding it to the minuend.

2. By this, the subtraction operation becomes an addition operation and instead of having a separate circuit for subtraction, the adder itself can be used to perform subtraction. This results in reduction of hardware.

3. In subtraction, each subtrahend bit of the number is subtracted from its corresponding significant minuend bit to form a difference bit.

4. If the minuend bit is smaller than the subtrahend bit, a 1 is borrowed from the next significant position. The fact that a 1 has been borrowed must be conveyed to the next higher pair of bits by means of a signal coming out (output) of a given stage and going into (input) the next higher stage.

The Half-Subtractor:

1. A half-subtractor is a combinational circuit that subtracts one bit from another and produces the difference. It also has an output to specify whether a 1 has been borrowed. It is used to subtract the LSB of the subtrahend from the LSB of the minuend when one binary number is subtracted from the other.

2. A Half-subtractor is a combinational circuit with two inputs A and B and two outputs d and b. d indicates the difference and b is the output signal generated that informs the next stage that a 1 has been borrowed. When a bit B is subtracted from another bit A,  a difference bit (d) and a borrow bit (b) result according to the rules given as

3. The output borrow b is a 0 as long as A≥B. It is a 1 for A=0 and B=1. The d output is the result of the arithmetic operation 2b+A-B.

4. A circuit that produces the correct difference and borrow bits in response to every possible combination of the two 1-bit numbers is, therefore,

                        d = A'B + AB' = A ⊕ B    and    b = A'B

5. That is, the difference bit is obtained by XORing the two inputs, and the borrow bit is obtained by ANDing the complement of the minuend with the subtrahend. Note that the logic for d is exactly the same as the logic for output S in the half-adder.

6. A half-subtractor can also be realized using universal logic, either with only NAND gates or with only NOR gates, as:

NAND Logic:

NOR Logic:
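A behavioral sketch of the half-subtractor equations d = A ⊕ B and b = A'B (illustrative Python model):

```python
def half_subtractor(a, b):
    """d = A XOR B; borrow b = A'B, raised when A = 0 and B = 1."""
    d = a ^ b
    borrow = (a ^ 1) & b   # complement of A, ANDed with B
    return d, borrow
```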

The Full-Subtractor:

1. The half-subtractor can be used only for LSB subtraction. If there is a borrow during the subtraction of the LSBs, it affects the subtraction in the next higher column; there the subtrahend bit is subtracted from the minuend bit together with the borrow generated by the preceding column.

2. Such a subtraction is performed by a full-subtractor. It subtracts one bit (B) from another bit (A) when there is already a borrow bi from the preceding column, and it outputs the difference bit (d) and the borrow bit (b) required by the next column.

3. The two outputs present the difference and output borrow. The 1s and 0s for the output variables are determined from the subtraction of A-B-bi.

4. From the truth table, a circuit that will produce the correct difference and borrow bits in response to every possible combination of A, B and bi is

5. A full-subtractor can be realized using X-OR gates and AOI gates as

6. The full subtractor can also be realized using universal logic either using only NAND gates or using NOR gates as:

NAND Logic:

NOR Logic:
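A behavioral sketch of the full-subtractor, using the standard expressions d = A ⊕ B ⊕ bi and b = A'B + A'bi + B·bi (illustrative model):

```python
def full_subtractor(a, b, b_in):
    """Subtract bit b and the incoming borrow b_in from bit a."""
    d = a ^ b ^ b_in                             # difference bit
    na = a ^ 1                                   # A'
    b_out = (na & b) | (na & b_in) | (b & b_in)  # borrow to the next column
    return d, b_out
```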


Binary Parallel Adder:

1. A binary parallel adder is a digital circuit that adds two binary numbers in parallel form and produces the arithmetic sum of those numbers in parallel form. It consists of full adders connected in a chain , with the output carry from each full-adder connected to the input carry of the next full-adder in the chain.

2. The interconnection of four full-adder (FA) circuits provides a 4-bit parallel adder. The augend bits of A and the addend bits of B are designated by subscript numbers from right to left, with subscript 1 denoting the lower-order bit.

3. The carries are connected in a chain through the full-adders. The input carry to the adder is Cin and the output carry is C4. The S outputs generate the required sum bits. When the 4-bit full-adder circuit is enclosed within an IC package, it has four terminals for the augend bits, four terminals for the addend bits, four terminals for the sum bits, and two terminals for the input and output carries.

4. An n-bit parallel adder requires n full-adders. It can be constructed from 4-bit, 2-bit and 1-bit full-adder ICs by cascading several packages. The output carry from one package must be connected to the input carry of the one with the next higher-order bits. The 4-bit full-adder is a typical example of an MSI function.

Ripple-Carry Adder:

1. To perform addition by hand, we start from the least-significant digit and add pairs of digits, progressing to the most-significant digit. If a carry is produced in position i, then this carry is added to the operands in position i + 1. The same arrangement can be used in a logic circuit that performs addition. For each bit position we can use a full-adder circuit, connected as shown in below figure

Fig. An n-bit ripple-carry adder.

2. Note that to be consistent with the customary way of writing numbers, the least-significant bit position is on the right. Carries that are produced by the full-adders propagate to the left.

3. When the operands X and Y are applied as inputs to the adder, it takes some time before the output sum, S, is valid. Each full-adder introduces a certain delay before its si and ci+1 outputs are valid. Let this delay be denoted as ∆t.

4. Thus the carry-out from the first stage, c1, arrives at the second stage ∆t after the application of the x0 and y0 inputs. The carry-out from the second stage, c2, arrives at the third stage with a 2∆t delay, and so on.

5. The signal cn−1 is valid after a delay of (n − 1)∆t, which means that the complete sum is available after a delay of n∆t.

6. Because of the way the carry signals “ripple” through the full-adder stages, the circuit in above figure is called a ripple-carry adder.

7. The delay incurred to produce the final sum and carry-out in a ripple-carry adder depends on the size of the numbers.
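The ripple-carry structure can be simulated by chaining a full-adder model stage by stage; bit lists here are LSB first, an assumption of this sketch:

```python
def full_adder(x, y, c):
    return x ^ y ^ c, (x & y) | (x & c) | (y & c)

def ripple_carry_add(x_bits, y_bits, c_in=0):
    """n-bit ripple-carry adder: the carry-out of stage i feeds stage i+1."""
    c = c_in
    s_bits = []
    for xi, yi in zip(x_bits, y_bits):
        si, c = full_adder(xi, yi, c)
        s_bits.append(si)
    return s_bits, c

# X = (01111)2 = 15 and Y = (01010)2 = 10, written LSB first:
s, c_out = ripple_carry_add([1, 1, 1, 1, 0], [0, 1, 0, 1, 0])
# s = [1, 0, 0, 1, 1], i.e. (11001)2 = 25, and c_out = 0
```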

4-Bit Parallel Subtractor:

1. The subtraction of binary numbers can be carried out most conveniently by means of complements; the subtraction A−B can be done by taking the 2's complement of B and adding it to A.

2. The 2's complement can be obtained by taking the 1's complement and adding 1 to the least significant bit. The 1's complement can be implemented with inverters as

Binary-Adder Subtractor:

1. In a 4-bit adder-subtractor, the addition and subtraction operations are combined into one circuit with one common binary adder. This is done by including an X-OR gate with each full-adder. The mode input M controls the operation: when M=0 the circuit is an adder, and when M=1 the circuit becomes a subtractor.

2. Each X-OR gate receives input M and one of the inputs of B. When M=0, we have B ⊕ 0 = B. The full-adder receives the value B, the input carry is 0, and the circuit performs A+B. When M=1, we have B ⊕ 1 = B' and the input carry is 1. The B inputs are complemented and a 1 is added through the input carry. The circuit performs the operation A plus the 2's complement of B.
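The mode-controlled circuit can be sketched as below; the XOR on each B input and the carry-in both come from M (illustrative model, LSB-first bit lists):

```python
def add_sub(a_bits, b_bits, m):
    """m = 0: A + B (B XOR 0 = B, carry-in 0).
    m = 1: A + B' + 1 = A - B in 2's complement (B XOR 1 = B', carry-in 1)."""
    c = m                        # M also feeds the input carry
    s_bits = []
    for ai, bi in zip(a_bits, b_bits):
        bi ^= m                  # the XOR gate on each B input
        s_bits.append(ai ^ bi ^ c)
        c = (ai & bi) | (ai & c) | (bi & c)
    return s_bits, c
```

For example, with A = 0111 (7), B = 0011 (3) and m = 1 the sum bits come out as 0100 (4), the final carry being discarded as usual in 2's-complement subtraction.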


 

The Look-Ahead-Carry Adder:

1. In parallel-adder, the speed with which an addition can be performed is governed by the time required for the carries to propagate or ripple through all of the stages of the adder.

2. The look-ahead carry adder speeds up the process by eliminating this ripple carry delay. It examines all the input bits simultaneously and also generates the carry-in bits for all the stages simultaneously.

3. The method of speeding up the addition process is based on the two additional functions of the full-adder, called the carry generate and carry propagate functions.

4. Consider one full-adder stage, say the nth stage of a parallel adder, as shown in the figure. We know that it is made of two half-adders and that each half-adder contains an X-OR gate to produce the sum and an AND gate to produce the carry.

5. If both the bits An and Bn are 1s, a carry has to be generated in this stage regardless of whether the input carry Cn is a 0 or a 1. This is called a generated carry, expressed as Gn = An·Bn, which has to appear at the output through the OR gate as shown in the figure.

6. There is another possibility of producing a carry-out. The X-OR gate inside the half-adder at the input produces an intermediary sum bit, call it Pn, which is expressed as Pn = An ⊕ Bn. Next, Pn and Cn are added using the X-OR gate inside the second half-adder to produce the final sum bit Sn = Pn ⊕ Cn, while the AND gate of that half-adder produces the propagated-carry term Pn·Cn = (An ⊕ Bn)·Cn, which contributes to the carry for the (n+1)th stage.

7. Consider the case of both Pn and Cn being 1. The input carry Cn has to be propagated to the output only if Pn is 1. If Pn is 0, even if Cn is 1, the AND gate in the second half-adder will inhibit Cn. The carry-out of the nth stage is 1 when either Gn=1 or Pn·Cn=1, or both Gn and Pn·Cn are equal to 1. For the final sum and carry outputs of the nth stage, we get the following Boolean expressions: Sn = Pn ⊕ Cn and Cn+1 = Gn + Pn·Cn.

8. Observe the recursive nature of the expression for the output carry at the nth stage, which becomes the input carry for the (n+1)th stage. By expanding this recursion, it is possible to express the output carry of a higher significant stage in terms of the input variables rather than the carry-out of the previous stage.

9. Based on this, the expressions for the carry-outs of the various full-adders are as follows:

10. Observe that the final output carry is expressed as a function of the input variables in SOP form, which is a two-level AND-OR or equivalent NAND-NAND form. Note that the full look-ahead scheme requires an OR gate with (n+1) inputs and AND gates with the number of inputs varying from 2 to (n+1).
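A 4-bit sketch of the look-ahead expressions, with Gn = An·Bn and Pn = An ⊕ Bn; every carry below is a two-level SOP function of the operand bits and C0 (illustrative model, bit lists LSB first):

```python
def cla_carries(a, b, c0=0):
    """Look-ahead carries for a 4-bit adder."""
    g = [ai & bi for ai, bi in zip(a, b)]   # generate terms Gn = An.Bn
    p = [ai ^ bi for ai, bi in zip(a, b)]   # propagate terms Pn = An XOR Bn
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
          | (p[2] & p[1] & p[0] & c0))
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0])
          | (p[3] & p[2] & p[1] & p[0] & c0))
    return c1, c2, c3, c4
```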

2’s complement Addition and Subtraction using Parallel Adders:

1. Most modern computers use the 2's complement system to represent negative numbers and to perform subtraction. Addition and subtraction of signed numbers can be performed using only the addition operation if we use the 2's complement form to represent negative numbers.

2. The circuit shown can perform both addition and subtraction in the 2's complement system. This adder/subtractor circuit is controlled by the control signal ADD/SUB'.

3. When the ADD/SUB' level is HIGH, the circuit performs the addition of the numbers stored in registers A and B. When the ADD/SUB' level is LOW, the circuit subtracts the number in register B from the number in register A. The operation is:

When ADD/SUB' is a 1:

1. AND gates 1, 3, 5 and 7 are enabled, allowing B0, B1, B2 and B3 to pass to the OR gates 9, 10, 11, 12. AND gates 2, 4, 6 and 8 are disabled, blocking B0', B1', B2' and B3' from reaching the OR gates 9, 10, 11 and 12.

2. The levels B0 to B3 pass through the OR gates to the 4-bit parallel adder, to be added to the bits A0 to A3. The sum appears at the outputs S0 to S3.

3. ADD/SUB' = 1 causes no carry into the adder.

When ADD/SUB' is a 0:

1. AND gates 1, 3, 5 and 7 are disabled, blocking B0, B1, B2 and B3 from reaching the OR gates 9, 10, 11, 12. AND gates 2, 4, 6 and 8 are enabled, allowing B0', B1', B2' and B3' to reach the OR gates.

2. The levels B0' to B3' pass through the OR gates to the 4-bit parallel adder, to be added to the bits A0 to A3. The input carry C0 is now 1; thus the number in register B is converted to its 2's complement form.

3. The difference appears at the output S0 to S3.

4. Adder/subtractor circuits are used for adding and subtracting signed binary numbers. In computers, the output is transferred into register A (the accumulator) so that the result of the addition or subtraction always ends up stored in register A. This is accomplished by applying a transfer pulse to the CLK inputs of register A.

 

Serial Adder:

1. A serial adder is used to add binary numbers in serial form. The two binary numbers to be added serially are stored in two shift registers A and B. Bits are added one pair at a time through a single full adder (FA) circuit as shown.

2. The carry out of the full-adder is transferred to a D flip- flop. The output of this flip-flop is then used as the carry input for the next pair of significant bits. The sum bit from the S output of the full-adder could be transferred to a third shift register.

3. By shifting the sum into A while the bits of A are shifted out, it is possible to use one register for storing both augend and the sum bits. The serial input register B can be used to transfer a new binary number while the addend bits are shifted out during the addition.

The operation of the serial adder is:

1. Initially register A holds the augend, register B holds the addend, and the carry flip-flop is cleared to 0. The serial outputs (SO) of A and B provide a pair of significant bits for the full-adder at x and y.

2. The shift control enables both registers and the carry flip-flop, so that at each clock pulse both registers are shifted once to the right, the sum bit from S enters the leftmost flip-flop of A, and the output carry is transferred into flip-flop Q.

3. The shift control enables the registers for a number of clock pulses equal to the number of bits of the registers. For each succeeding clock pulse a new sum bit is transferred to A, a new carry is transferred to Q, and both registers are shifted once to the right.

4. This process continues until the shift control is disabled. Thus the addition is accomplished by passing each pair of bits together with the previous carry through a single full adder circuit and transferring the sum, one bit at a time, into register A.

5. Initially, register A and the carry flip-flop are cleared to 0 and then the first number is added from B. While B is shifted through the full adder, a second number is transferred to it through its serial input.

6. The second number is then added to the content of register A while a third number is transferred serially into register B. This can be repeated to form the addition of two, three, or more numbers and accumulate their sum in register A.
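The serial adder's clock-by-clock behavior can be simulated with one full-adder expression and a variable standing in for the carry flip-flop (illustrative model, bits LSB first):

```python
def serial_add(a_bits, b_bits):
    """One sum bit per clock pulse; q plays the role of the D flip-flop."""
    q = 0                                  # carry flip-flop, cleared initially
    sum_bits = []
    for x, y in zip(a_bits, b_bits):       # one clock pulse per bit pair
        sum_bits.append(x ^ y ^ q)         # sum bit shifted into register A
        q = (x & y) | (x & q) | (y & q)    # next state of the carry flip-flop
    return sum_bits
```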

Difference between Serial and Parallel Adders:

1. The parallel adder uses registers with parallel load, whereas the serial adder uses shift registers.

2. The number of full adder circuits in the parallel adder is equal to the number of bits in the binary numbers, whereas the serial adder requires only one full adder circuit and a carry flip- flop.

3. Excluding the registers, the parallel adder is a combinational circuit, whereas the serial adder is a sequential circuit. The sequential circuit in the serial adder consists of a full-adder and a flip-flop that stores the output carry.

BCD Adder:

The BCD addition process:

1. Add the 4-bit BCD code groups for each decimal digit position using ordinary binary addition.

2. For those positions where the sum is 9 or less, the sum is in proper BCD form and no correction is needed.

3. When the sum of two digits is greater than 9, a correction of 0110 should be added to that sum, to produce the proper BCD result. This will produce a carry to be added to the next decimal position.

A BCD adder circuit must be able to operate in accordance with the above steps.

In other words, the circuit must be able to do the following:

1. Add two 4-bit BCD code groups, using straight binary addition.

2. Determine if the sum of this addition is greater than 1001 (decimal 9); if it is, add 0110 (decimal 6) to this sum and generate a carry to the next decimal position.

1. The first requirement is easily met by using a 4-bit binary parallel adder such as the 74LS83 IC. For example, if the two BCD code groups A3A2A1A0 and B3B2B1B0 are applied to a 4-bit parallel adder, the adder will output S4S3S2S1S0, where S4 is actually C4, the carry-out of the MSB bits.

2. The sum outputs S4S3S2S1S0 can range anywhere from 00000 to 10010 (when both BCD code groups are 1001 = 9). The circuitry for a BCD adder must include the logic needed to detect whenever the sum is greater than 01001, so that the correction can be added in. The cases where the sum is greater than 01001 are listed as:

3. Let us define a logic output X that will go HIGH only when the sum is greater than 01001 (i.e., for the cases in the table). If we examine these cases, we see that X will be HIGH for either of the following conditions:

a)      Whenever S4 =1(sum greater than 15)

b)      Whenever S3 = 1 and either S2 or S1 or both are 1 (sum 10 to 15). This condition can be expressed as

            X=S4+S3(S2+S1)

4. Whenever X=1, it is necessary to add the correction factor 0110 to the sum bits, and to generate a carry. The circuit consists of three basic parts.

5. The two BCD code groups A3A2A1A0 and B3B2B1B0 are added together in the upper 4-bit adder, to produce the sum S4S3S2S1S0. The logic gates shown implement the expression for X.

6. The lower 4-bit adder will add the correction 0110 to the sum bits only when X=1, producing the final BCD sum output represented by ∑3∑2∑1∑0.

7. The signal X is also the carry-out that is produced when the sum is greater than 01001. When X=0, there is no carry and no addition of 0110. In such cases, ∑3∑2∑1∑0 = S3S2S1S0.

8. Two or more BCD adders can be connected in cascade when two or more digit decimal numbers are to be added. The carry-out of the first BCD adder is connected as the carry-in of the second BCD adder, the carry-out of the second BCD adder is connected as the carry-in of the third BCD adder and so on.
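One digit stage of the BCD adder, including the correction logic X = S4 + S3(S2 + S1), can be sketched as follows (illustrative model working on digit values):

```python
def bcd_add_digit(a, b, c_in=0):
    """Binary-add two BCD digits, then add 0110 whenever X = 1."""
    s = a + b + c_in                       # upper 4-bit adder output, 0..19
    s4, s3 = (s >> 4) & 1, (s >> 3) & 1
    s2, s1 = (s >> 2) & 1, (s >> 1) & 1
    x = s4 | (s3 & (s2 | s1))              # X = S4 + S3(S2 + S1)
    if x:
        s += 0b0110                        # lower adder adds the correction
    return s & 0b1111, x                   # BCD digit and carry to next stage

# 9 + 9 gives digit 8 with carry 1, i.e. 18
```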

 

EXCESS-3(XS-3) ADDER:

To perform Excess-3 additions,

1. Add the two XS-3 code groups.

2. If carry = 1, add 0011 (3) to the sum of those two code groups.

    If carry = 0, subtract 0011 (3), i.e., add 1101 (13 in decimal) to the sum of those two code groups.

Ex: Add 9 and 5         

                                    1100    9 in Xs-3

                                  +1000    5 in xs-3

                                    ___ _ _ __

                        1          0100    there is a carry

                +0011          0011    add 3 to each group

              ----------          ----------

                 0100           0111       14 in xs-3

                  (1)             (4)

EX:

1. Implementation of an XS-3 adder using 4-bit binary adders is shown. The augend (A3A2A1A0) and addend (B3B2B1B0) in XS-3 are added using the 4-bit parallel adder. If the carry is a 1, then 0011 (3) is added to the sum bits S3S2S1S0 of the upper adder in the lower 4-bit parallel adder.

2. If the carry is a 0, then 1101 (13) is added to the sum bits (this is equivalent to subtracting 0011 (3) from the sum bits). The correct sum in XS-3 is obtained.
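One digit stage of the XS-3 adder can be sketched as follows (illustrative model on 4-bit code groups):

```python
def xs3_add_digit(a_xs3, b_xs3):
    """Add two XS-3 code groups; correct by +0011 on carry, else +1101."""
    s = a_xs3 + b_xs3
    carry = (s >> 4) & 1
    s &= 0b1111
    correction = 0b0011 if carry else 0b1101   # +3, or -3 via 2's complement
    return (s + correction) & 0b1111, carry

# 9 + 5: 1100 + 1000 -> carry 1, corrected digit 0111 (4 in XS-3), i.e. 14
```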

Excess-3 (XS-3) Subtractor:

To perform Excess-3 subtraction,

1. Complement the subtrahend.

2. Add the complemented subtrahend to the minuend.

3. If carry = 1, the result is positive; add 3 and the end-around carry to the result. If carry = 0, the result is negative; subtract 3 (i.e., add 1101) and take the 1's complement of the result.

Ex:    Perform 9-4

                        1100    9 in xs-3

                       +1000   1's complement of 4 in XS-3

                        --------

            (1)        0100    There is a carry

 

                        +0011  Add 0011(3)

                     ------------

                          0111

                                1  End around carry

                      ------------

                          1000  5 in xs-3

1. The minuend and the 1‘s complement of the subtrahend in xs-3 are added in the upper 4- bit parallel adder.

2. If the carry-out from the upper adder is a 0, then 1101 is added to the sum bits of the upper adder in the lower adder, and the sum bits of the lower adder are complemented to get the result. If the carry-out from the upper adder is a 1, then 0011 (3) and the end-around carry are added in the lower adder, and the sum bits of the lower adder give the result.
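The XS-3 subtraction procedure can be sketched digit-wise as below (illustrative model; the sign handling follows the three steps listed above):

```python
def xs3_sub_digit(a_xs3, b_xs3):
    """Subtract via the 1's complement of the subtrahend.
    carry = 1: positive result, add 0011 plus the end-around carry.
    carry = 0: negative result, add 1101 and complement the sum bits."""
    s = a_xs3 + (~b_xs3 & 0b1111)            # minuend + 1's complement
    carry = (s >> 4) & 1
    s &= 0b1111
    if carry:
        return (s + 0b0011 + 1) & 0b1111, True       # positive magnitude
    return ~((s + 0b1101) & 0b1111) & 0b1111, False  # negative magnitude

# 9 - 4: result 1000 (5 in XS-3), positive
```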

2.3.2. Multiplexers (Data Selector):

1. A multiplexer circuit has a number of data inputs, one or more select inputs, and one output. It passes the signal value on one of the data inputs to the output. The data input is selected by the values of the select inputs.

Fig. Multiplexer block diagram

2. Below figure shows a 2-to-1 multiplexer. Part (a) gives the symbol commonly used. The select input, s, chooses as the output of the multiplexer either input w0 or w1. The multiplexer’s functionality can be described in the form of a truth table as shown in part (b) of the figure. Part (c) gives a sum-of-products implementation of the 2-to-1 multiplexer.

Fig. A 2-to-1 multiplexer.

3. Below figure a depicts a larger multiplexer with four data inputs, w0, . . . ,w3, and two select inputs, s1 and s0. As shown in the truth table in part (b) of the figure, the two-bit number represented by s1s0 selects one of the data inputs as the output of the multiplexer

Fig. A 4-to-1 multiplexer.

4. A sum-of-products implementation of the 4-to-1 multiplexer appears in part (c) of the figure. It realizes the multiplexer function

            f = s1's0'w0 + s1's0w1 + s1s0'w2 + s1s0w3

5. It is possible to build larger multiplexers using the same approach. Usually, the number of data inputs, n, is an integer power of two.

6. A multiplexer that has n data inputs, w0, . . . , wn−1, requires ⌈log2 n⌉ select inputs. Larger multiplexers can also be constructed from smaller multiplexers.

For example, the 4-to-1 multiplexer can be built using three 2-to-1 multiplexers as illustrated in below figure.

Fig. Using 2-to-1 multiplexers to build a 4-to-1 multiplexer.

Below figure  shows how a 16-to-1 multiplexer is constructed with five 4-to-1 multiplexers.

Fig. A 16-to-1 multiplexer.
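The composition of larger multiplexers from 2-to-1 stages can be sketched as follows (illustrative model):

```python
def mux2(w0, w1, s):
    """2-to-1 multiplexer: f = s'w0 + s w1."""
    return w1 if s else w0

def mux4(w0, w1, w2, w3, s1, s0):
    """4-to-1 mux from three 2-to-1 muxes: s0 chooses within each pair,
    s1 chooses between the two first-level outputs."""
    return mux2(mux2(w0, w1, s0), mux2(w2, w3, s0), s1)
```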

Synthesis of Logic Functions Using Multiplexers:

1. Multiplexers are useful in many practical applications. They can also be used in a more general way to synthesize logic functions. Consider the example in below figure a.

2. The truth table defines the function f = w1 ⊕ w2. This function can be implemented by a 4-to-1 multiplexer in which the values of f in each row of the truth table are connected as constants to the multiplexer data inputs.

3. The multiplexer select inputs are driven by w1 and w2. Thus for each valuation of w1w2, the output f is equal to the function value in the corresponding row of the truth table

Fig . Synthesis of a logic function using multiplexers.

4. The above implementation is straightforward, but it is not very efficient. A better implementation can be derived by manipulating the truth table as indicated in above figure b, which allows f to be implemented by a single 2-to-1 multiplexer.

5. One of the input signals, w1 in this example, is chosen as the select input of the 2-to-1 multiplexer. The truth table is redrawn to indicate the value of f for each value of w1.

6. When w1 = 0, f has the same value as input w2, and when w1 = 1, f has the value of w2'. The circuit that implements this truth table is given in figure c above. This procedure can be applied to synthesize a circuit that implements any logic function.

7. Below figure a gives the truth table for the three-input majority function, and it shows how the truth table can be modified to implement the function using a 4-to-1 multiplexer. Any two of the three inputs may be chosen as the multiplexer select inputs. We have chosen w1 and w2 for this purpose, resulting in the circuit in Figure b.

Fig. Implementation of the three-input majority function

using a 4-to-1 multiplexer.
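The majority implementation can be checked with a small model; the data-input pattern 0, w3, w3, 1 is read off the regrouped truth table:

```python
def mux4(w0, w1, w2, w3, s1, s0):
    # behavioral 4-to-1 mux: select code s1 s0 picks one data input
    return [w0, w1, w2, w3][(s1 << 1) | s0]

def majority(w1, w2, w3):
    """Three-input majority using a 4-to-1 mux with w1, w2 as selects."""
    return mux4(0, w3, w3, 1, w1, w2)
```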

2.3.3. Decoders:

2 to 4 Decoder:

1. Consider the logic circuit in below figure. It has two inputs, w1 and w0, and four outputs, y0, y1, y2, and y3. As shown in the truth table, only one of the outputs is asserted at a time, and each output corresponds to one valuation of the inputs. Setting the inputs w1w0 to 00, 01, 10, or 11 causes the output y0, y1, y2, or y3 to be set to 1, respectively.

2. This type of circuit is called a binary decoder. Its inputs represent a binary number, which is decoded to assert the corresponding output. A circuit symbol and logic circuit for this decoder are shown in parts (b) and (c) of the figure. Each output is driven by an AND gate that decodes the corresponding valuation of w1w0.

Fig. 2 to 4 decoder truth table and block diagram

Fig. A 2-to-4 decoder Logic circuit

3. It is useful to include an enable input, En, in a decoder circuit, as illustrated in below figure. When enabled by setting En = 1 the decoder behaves as presented in above figure.

Fig. 2 to 4 decoder with enable input

4. But, if it is disabled by setting En = 0, then none of the outputs are asserted. Note that only five rows are shown in the truth table, because if En = 0 then all outputs are equal to 0 regardless of the values of w1 and w0.

5. The truth table indicates this by showing x when it does not matter whether the variable in question has the value 0 or 1.

6.  A graphical symbol for this decoder is given in Figure b. Part (c) of the figure shows how the enable capability can be included in the decoder of Figure c.

7. A binary decoder with n inputs has 2^n outputs. A graphical symbol for an n-to-2^n decoder is shown in Figure d.

Fig. Binary decoder.

Fig. A 3-to-8 decoder using two 2-to-4 decoders.

3-to-8 decoder:

         Number of inputs = 3, Number of outputs = 8

Fig. Binary to octal decoder(3-to-8 decoder)

Fig. Truth table

 

Fig. Logic circuit
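The 2-to-4 decoder with enable, and the 3-to-8 decoder built from two of them (the third input enabling one half or the other), can be sketched as follows (illustrative model):

```python
def decoder_2to4(w1, w0, en=1):
    """2-to-4 binary decoder: one output asserted when En = 1,
    all outputs 0 when En = 0."""
    y = [0, 0, 0, 0]
    if en:
        y[(w1 << 1) | w0] = 1
    return y

def decoder_3to8(w2, w1, w0):
    """3-to-8 decoder from two 2-to-4 decoders, as in the figure above."""
    low = decoder_2to4(w1, w0, en=1 - w2)   # outputs y0..y3
    high = decoder_2to4(w1, w0, en=w2)      # outputs y4..y7
    return low + high
```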

2.3.4. Demultiplexers:

1. A multiplexer has one output, n data inputs, and ⌈log2 n⌉ select inputs. The purpose of the multiplexer circuit is to multiplex the n data inputs onto the single data output under control of the select inputs.

2. A circuit that performs the opposite function, namely, placing the value of a single data input onto multiple data outputs, is called a demultiplexer.

Fig. De multiplexer block diagram

1- to- 4 Demultiplexer:

No. of inputs = 1, No. of outputs = 4, No. of select inputs = 2

Fig.Truth table and block diagram

Fig. Logic circuit
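A behavioral sketch of the 1-to-4 demultiplexer (illustrative model):

```python
def demux_1to4(d, s1, s0):
    """Route the single data input d to the output chosen by s1 s0;
    the remaining outputs stay at 0."""
    y = [0, 0, 0, 0]
    y[(s1 << 1) | s0] = d
    return y
```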

2.3.5. Encoders:

1. An encoder performs the opposite function of a decoder. It encodes given information into a more compact form.

2. A binary encoder encodes information from 2^n inputs into an n-bit code, as indicated in the figure. Exactly one of the input signals should have a value of 1, and the outputs present the binary number that identifies which input is equal to 1.

Fig. A 2^n-to-n binary encoder.

4-to-2 encoder:

No. of inputs = 4, No.of outputs = 2

From the truth table, observe that the output y0 is 1 when either input w1 or w3 is 1, and output y1 is 1 when input w2 or w3 is 1. Hence these outputs can be generated as y0 = w1 + w3 and y1 = w2 + w3, the circuit in figure b.

8-to-3 encoder:

Fig. Truth table and block diagram

Fig. Logic diagram

Decimal to BCD encoder:

Fig. Truth table and block diagram

Fig. Logic diagram

Priority Encoders:

1. Another useful class of encoders is based on the priority of input signals. In a priority encoder each input has a priority level associated with it. The encoder outputs indicate the active input that has the highest priority.

2. When an input with a high priority is asserted, the other inputs with lower priority are ignored. The truth table for a 4-to-2 priority encoder is shown in Figure.

Fig .Truth table for a 4-to-2 priority encoder.

3. It assumes that w0 has the lowest priority and w3 the highest. The outputs y1 and y0 represent the binary number that identifies the highest priority input set to 1. Since it is possible that none of the inputs is equal to 1, an output, z, is provided to indicate this condition.

4. It is set to 1 when at least one of the inputs is equal to 1. It is set to 0 when all inputs are equal to 0. The outputs y1 and y0 are not meaningful in this case, and hence the first row of the truth table can be treated as a don’t-care condition for y1 and y0.

5. The behavior of the priority encoder is most easily understood by first considering the last row in the truth table. It specifies that if input w3 is 1, then the outputs are set to y1y0 = 11.

6. Because w3 has the highest priority level, the values of inputs w2, w1, and w0 do not matter. To reflect the fact that their values are irrelevant, w2, w1, and w0 are denoted by the symbol x in the truth table.

7. The second-last row in the truth table stipulates that if w2 = 1, then the outputs are set to y1y0 = 10, but only if w3 = 0. Similarly, input w1 causes the outputs to be set to y1y0 = 01 only if both w3 and w2 are 0. Input w0 produces the outputs y1y0 = 00 only if w0 is the only input that is asserted.
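The priority behavior just described can be sketched directly (illustrative model; y1 and y0 are returned as 0 in the don't-care case z = 0):

```python
def priority_encode_4to2(w3, w2, w1, w0):
    """4-to-2 priority encoder; w3 has the highest priority.
    Returns (y1, y0, z)."""
    if w3:
        return 1, 1, 1
    if w2:
        return 1, 0, 1
    if w1:
        return 0, 1, 1
    if w0:
        return 0, 0, 1
    return 0, 0, 0      # z = 0: no input asserted
```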

2.3.6. Code converters:

1. The availability of a large variety of codes for the same discrete elements of information results in the use of different codes by different digital systems. It is sometimes necessary to use the output of one system as the input to another.

2. A conversion circuit must be inserted between the two systems if each uses different codes for the same information. Thus a code converter is a logic circuit whose inputs are bit patterns representing numbers (or characters) in one code and whose outputs are the corresponding representations in a different code. Code converters are usually multiple-output circuits.

3. To convert from binary code A to binary code B, the input lines must supply the bit combination of elements as specified by code A and the output lines must generate the corresponding bit combination of code B. A combinational circuit performs this transformation by means of logic gates.

4. For example, a binary-to-gray code converter has four binary input lines B4, B3, B2, B1 and four gray code output lines G4, G3, G2, G1. When the input is 0010, for instance, the output should be 0011, and so forth. To design a code converter, we use a code table, treating it as a truth table to express each output as a Boolean algebraic function of all the inputs.

5. In this example of binary-to-gray code conversion, we can treat the binary-to-gray code table as four truth tables to derive expressions for G4, G3, G2, and G1. Each of these four expressions would, in general, contain all four input variables B4, B3, B2, and B1. Thus, this code converter is actually equivalent to four logic circuits, one for each of the truth tables.

6. The logic expressions derived for the code converter can be simplified using the usual techniques, including 'don't cares' if present. Even if the input is an unweighted code, the same cell-numbering method used earlier can be applied, but the cell numbers must correspond to the input combinations as if they were an 8-4-2-1 weighted code.
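For the binary/gray pair of converters, the standard result (not derived in the text) is that each gray bit is the XOR of adjacent binary bits; a sketch on 4-bit values (illustrative model):

```python
def binary_to_gray(b):
    """G = B XOR (B >> 1): each gray bit is the XOR of adjacent binary bits."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Inverse conversion: XOR-fold the gray bits from the MSB downward."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Matches the example in the text: binary 0010 -> gray 0011
```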

Design of a 4-bit binary to gray code converter:

Design of a 4-bit gray to Binary code converter:

Design of a 4-bit BCD to XS-3 code converter:

Design of a BCD to gray code converter:
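As a numerical check for the BCD to XS-3 design listed above (a sketch, not the gate-level circuit): the XS-3 code of a decimal digit is simply its BCD value plus 3.

```python
# Sketch: XS-3 (excess-3) adds 3 to each BCD digit, so 0-9 map to 3-12.
def bcd_to_xs3(d):
    """Convert one BCD digit (0-9) to its XS-3 code value."""
    assert 0 <= d <= 9, "valid BCD digits only"
    return d + 3

print(format(bcd_to_xs3(0), "04b"))  # 0011
print(format(bcd_to_xs3(9), "04b"))  # 1100
```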

 

 

 

2.3.7. Comparators:


1-bit Magnitude Comparator:


2-bit Magnitude Comparator:

 

 

4-Bit Magnitude Comparator:
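The behaviour of the 4-bit magnitude comparator can be sketched as follows (an illustrative Python model of the cascaded bit-by-bit comparison, not the gate-level circuit):

```python
# Sketch: a 4-bit magnitude comparator produces three outputs,
# A>B, A=B, A<B, decided at the first bit position (from the MSB
# down) where the two inputs differ.
def magnitude_comparator(a, b):
    """Return the (A>B, A=B, A<B) outputs for two 4-bit inputs."""
    for i in (3, 2, 1, 0):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        if ai != bi:
            return (ai > bi, False, ai < bi)
    return (False, True, False)  # all bits equal

print(magnitude_comparator(0b1010, 0b0111))  # (True, False, False)
```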

 

 

IC comparator:

 

 

2.3.8. Parity generator and Parity Checker:

1. The main function of the parity generator and the parity checker is to detect errors in data transmission. The parity bit is an extra bit set at the transmitting side to either '0' or '1'; it can detect only single-bit errors, but it is the easiest method of error detection.

2. There are different types of error-detection codes, such as parity, ring counter, block parity, and Hamming codes. The parity bit, parity generator, and parity checker are briefly explained below.

3. What is a Parity Bit?

    Definition: A parity bit or check bit is a bit added to a binary code so that the code's parity can be checked, i.e., whether the code contains an even or odd number of 1's. There are two types of parity: even parity and odd parity.

4. With an odd parity bit, the code must contain an odd number of 1's. For example, the 6-bit code 100011 has odd parity because it contains three 1's. With an even parity bit, the code must contain an even number of 1's. For example, the 6-bit code 101101 has even parity because it contains four 1's.
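The parity of the two example codes above can be checked by simply counting the 1's (an illustrative sketch):

```python
# Sketch: a code's parity is decided by the count of 1's in it.
def parity_of(code):
    """Return 'even' or 'odd' for a binary code given as a string."""
    ones = code.count("1")
    return "even" if ones % 2 == 0 else "odd"

print(parity_of("100011"))  # odd  (three 1's)
print(parity_of("101101"))  # even (four 1's)
```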

5. What is the Parity Generator?

    Definition: The parity generator is a combinational circuit at the transmitter. It takes the original message as input and generates the parity bit for that message; the transmitter then sends the message along with its parity bit.

6. Types of Parity Generator

   The classification of this generator is shown in the below figure

Even Parity Generator:

1. The even parity generator keeps the transmitted word at an even number of 1's. For example, if the data contains an odd number of 1's, the generator adds an extra 1 so that the complete word contains an even number of 1's.

2. This is also a combinational circuit whose output depends only on the given input data, i.e., the binary code applied to the generator.

3. Let us consider three input bits A, B, and C. Three bits give 8 combinations, from 000 to 111 (0 to 7). The truth table of the even parity generator for three input bits is shown below.

     0 0 0 – In this input binary code the even parity is taken as ‘0’ because the input is already in even parity, so no need to add even parity once again for this input.

     0 0 1 – In this input binary code there is only a single '1', which is an odd number of 1's. If there is an odd number of 1's, the even parity generator must generate another '1' to make the parity even, so the even parity bit is taken as 1 to make the 0 0 1 code even parity.

     0 1 0 –  This bit is in odd parity so even parity is taken as 1 to make the 0 1 0 code into even parity.

     0 1 1 – This bit is already in even parity so even parity is taken as 0 to make the 0 1 1 code into even parity.

      1 0 0 – This bit is in odd parity so even parity is taken as 1 to make the 1 0 0 code into even parity.

      1 0 1 – This bit is already in even parity so even parity is taken as 0 to make the 1 0 1 code into even parity.

      1 1 0 – This bit is also in even parity so even parity is taken as 0 to make the 1 1 0 code into even parity.

      1 1 1 – This bit is in odd parity so even parity is taken as 1 to make the 1 1 1 code into even parity.

Even Parity Generator Truth Table

A B C | Even Parity
------+------------
0 0 0 |      0
0 0 1 |      1
0 1 0 |      1
0 1 1 |      0
1 0 0 |      1
1 0 1 |      0
1 1 0 |      0
1 1 1 |      1

 

4. The Karnaugh map (K-map) simplification for the three-bit even parity bit is

Fig. K-map for even parity generator

5. From the above even parity truth table, the simplified expression for the parity bit is P = A ⊕ B ⊕ C.

6. The even parity expression is implemented using two Ex-OR gates; the logic diagram of the even parity generator using Ex-OR gates is shown below.

Fig. Even parity generator logic circuit

7. In this way, the even parity generator produces a word with an even number of 1's from the input data.
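As a quick check (an illustrative sketch, not part of the original notes), the expression P = A ⊕ B ⊕ C reproduces the even parity truth table above:

```python
# Sketch: even parity bit P = A XOR B XOR C, realised in the
# circuit with two Ex-OR gates.
def even_parity_bit(a, b, c):
    return a ^ b ^ c

# Reproduce the even parity truth table row by row.
for n in range(8):
    a, b, c = (n >> 2) & 1, (n >> 1) & 1, n & 1
    print(a, b, c, "->", even_parity_bit(a, b, c))
```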

Odd Parity Generator

1. The odd parity generator keeps the transmitted word at an odd number of 1's. For example, if the data contains an even number of 1's, the generator adds an extra 1 so that the complete word contains an odd number of 1's.

2. This is also a combinational circuit whose output always depends on the given input data. Only when the data contains an even number of 1's is the parity bit set to 1, making the complete word contain an odd number of 1's.

3. Let us consider three input bits A, B, and C. The truth table of the odd parity generator for three input bits is shown below.

     0 0 0 – In this input binary code the odd parity is taken as ‘1’ because the input is in even parity.

     0 0 1 –  This binary input is already in odd parity, so odd parity is taken as 0.

     0 1 0 – This binary input is also in odd parity, so odd parity is taken as 0.

     0 1 1 – This bit is in even parity so odd parity is taken as 1 to make the 0 1 1 code into odd parity.

     1 0 0 – This bit is already in odd parity, so odd parity is taken as 0 to make the 1 0 0 code into odd parity.

     1 0 1 – This input bit is in even parity, so odd parity is taken as 1 to make the 1 0 1 code into odd parity.

     1 1 0 – This bit is in even parity, so odd parity is taken as 1.

     1 1 1 – This input bit is in odd parity, so odd parity is taken as 0.

Odd Parity Generator Truth Table

 

A B C | Odd Parity
------+-----------
0 0 0 |     1
0 0 1 |     0
0 1 0 |     0
0 1 1 |     1
1 0 0 |     0
1 0 1 |     1
1 1 0 |     1
1 1 1 |     0

 

Fig. K-map for odd parity generator

From the above odd parity truth table, the simplified expression for the parity bit is P = (A ⊕ B ⊕ C)′.

The logic diagram of this odd parity generator is shown below.

Fig. Odd parity generator logic circuit

In this way, the odd parity generator produces a word with an odd number of 1's from the input data.
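The odd parity bit is the complement of the Ex-OR of the inputs; a quick sketch (illustrative, not from the notes) reproducing the odd parity truth table above:

```python
# Sketch: odd parity bit is the complement of the even parity
# function, P = (A XOR B XOR C)'.
def odd_parity_bit(a, b, c):
    return 1 - (a ^ b ^ c)

# Reproduce the odd parity truth table row by row.
for n in range(8):
    a, b, c = (n >> 2) & 1, (n >> 1) & 1, n & 1
    print(a, b, c, "->", odd_parity_bit(a, b, c))
```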

 What is the Parity Checker?

  Definition: The parity checker is the combinational circuit at the receiver. It takes the received message, including the parity bit, as input; it outputs '1' if an error is found and '0' if no error is found in the message and its parity bit.

Types of Parity Checker

The classification of the parity checker is shown in the below figure

Even Parity Checker:

1. In an even parity checker, if the error bit (E) equals '1', an error has occurred; if E = 0, there is no error.

            Error Bit (E) =1, error occurs

            Error Bit (E) =0, no error

2. The parity checker circuit is shown in the below figure

Fig. Even parity checker logic circuit

Odd Parity Checker:

1. In an odd parity checker, if the error bit (E) equals '1', there is no error; if E = 0, an error has occurred.

            Error Bit (E) =1, no error

            Error Bit (E) =0, error occurs

2. The parity checker cannot detect errors in more than one bit, and correction of the data is also not possible; these are the main disadvantages of the parity checker.
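The checker behaviour described above can be sketched as follows (an illustrative model: in an even-parity system, the error bit E is the Ex-OR of all received bits, parity bit included):

```python
# Sketch: even parity checker. E = XOR of every received bit
# (message + parity); E = 1 flags an error.
def even_parity_error(bits):
    e = 0
    for bit in bits:
        e ^= bit
    return e

# 3-bit message 101 with even parity bit 0 -> no error.
print(even_parity_error([1, 0, 1, 0]))  # 0
# Same word with one bit flipped in transit -> error detected.
print(even_parity_error([1, 1, 1, 0]))  # 1
# Two flipped bits cancel out and slip through undetected,
# illustrating the limitation noted above.
print(even_parity_error([0, 1, 1, 0]))  # 0
```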

3. Advantages of Parity

a)      Simplicity

b)      Easy to use

4. Applications of Parity

a)      Parity is used in digital systems and many hardware applications

b)      The parity bit is also used in Small Computer System Interface (SCSI) and also in Peripheral Component Interconnect (PCI) to detect the errors

         1). What is the difference between the parity generator and parity checker?

         The parity generator generates the parity bit in the transmitter and the parity checker checks the parity bit in the receiver.

         2). What does no parity mean?

         When the parity bits are not used to check for errors then the parity bit is said to be non-parity or no parity or the absence of parity.

         3). What is the parity value?     

         The parity value concept is used for both commodities and securities; the term refers to the situation in which the values of two assets are equal.

         4). Why do we need a parity checker?

         The parity checker is needed to detect errors in communication; it is also used for testing in memory storage devices.         

         5). How can the parity bit detect a damaged data unit?

         The redundant bit in this technique is called a parity bit; it detects a damaged data unit when an error occurs during transmission of the data.

2.3.9. BCD to 7 segment converter (decoder):

 

Introduction:

 

1. A digital or binary decoder is a digital combinational logic circuit which can convert one form of digital code into another form.

 

2. A BCD to 7-segment display decoder is a special decoder that converts binary-coded decimals into a form that can be displayed directly on a 7-segment display.

 

3. BCD:

     BCD stands for binary-coded decimal. It is a digital numbering system in which each decimal digit is represented using 4 binary bits.

 

4. There are 10 digits in the decimal system, so to represent all 10 digits we need 10 of the 16 possible combinations of 4 binary bits.
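A quick sketch of BCD encoding (illustrative; `to_bcd` is a hypothetical helper, not from the notes):

```python
# Sketch: BCD encodes each decimal digit as its own 4-bit group.
def to_bcd(number):
    """Return the BCD encoding of a non-negative integer,
    one 4-bit group per decimal digit."""
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_bcd(259))  # 0010 0101 1001
```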

 

5. A digital system such as a computer can understand and easily process large numbers in binary format. However, a human cannot easily read large binary numbers. To solve this problem we display the value as decimal digits using a 7-segment display.

 

6. 7-Segment Display:

     It is a digital device that can be used for displaying decimal numbers, letters, and other characters.

7. A 7-segment display contains 7 LED segments arranged in the shape shown in the figure above. Generally, there are 8 input pins on a 7-segment display: 7 input pins, one for each of the 7 LEDs, and one pin for the common terminal.

 

8. There are two types of 7-segment displays.

 

a)      Common Cathode

In this type of 7-segment display, the cathodes of all 7 LEDs are connected together to form a common terminal. It should be connected to GND or logic '0' during operation.

 

To illuminate any LED of the display, you need to supply logic ‘1’ to its corresponding input pin.

 

b)      Common Anode

In this type of 7-segment display, the anode terminals of all 7 LEDs are connected together to form a common anode terminal. This terminal should be connected to Vcc or logic '1' during operation.

 

            To illuminate any of the LED segments, we need to apply logic '0' to it.

 

Working of 7-Segment Display (LED) Circuit :

 

1. The 7 LED segments of the display and their pins are labelled "a", "b", "c", "d", "e", "f" and "g", as shown in the figure given below. Each pin illuminates its specific segment only.

 

2. We take a common-cathode display as our example.

 

3. Suppose we want to display the digit '0'. To do so, we need to turn on segments "a", "b", "c", "d", "e", and "f", and turn off segment "g".

 

4. The segment combinations for all decimal numbers are given below.

 

Fig. Truth table 7-segment decoder
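The digit-to-segment truth table above can be sketched as a lookup (illustrative Python; the segment patterns assume a common-cathode display with segments ordered a through g):

```python
# Sketch: segment drive patterns for digits 0-9, one bit per
# segment in the order a, b, c, d, e, f, g ('1' = segment lit).
SEGMENTS = {
    0: "1111110", 1: "0110000", 2: "1101101", 3: "1111001",
    4: "0110011", 5: "1011011", 6: "1011111", 7: "1110000",
    8: "1111111", 9: "1111011",
}

def segments_on(digit):
    """Return the list of segments driven to logic '1' for a digit."""
    return [s for s, bit in zip("abcdefg", SEGMENTS[digit]) if bit == "1"]

print(segments_on(0))  # ['a', 'b', 'c', 'd', 'e', 'f']  (g off)
```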

5.  Karnaugh Maps Simplification

     For the other input combinations the output is a don't care (X), as there are no more digits to display. We derive the expression for each output using a Karnaugh map (K-map).

     For output a:

For output b:

For output c:

 

For output d:

 

 

For output e:

 

 

For output f:

 

For output g:

 

7-Segment Display Decoder Circuit :

     We have derived an expression for each output; now we build the schematic using logic gates, as shown in the figure given below.

Fig: Schematic of BCD to 7-Segment Display Decoder
