UNIT – 1: DESIGN CONCEPTS
1.1. DIGITAL HARDWARE:
1. Logic circuits are used to build computer hardware, as well as many other types of products. All such products are broadly classified as digital hardware.
2. The name digital derives from the way in which information is represented in computers: as electronic signals that correspond to digits of information.
3. The technology used to build digital hardware has evolved dramatically over the past few decades.
4. Until the 1960s logic circuits were constructed with bulky components, such as transistors and resistors that came as individual parts.
5. The advent of integrated circuits made it possible to place a number of transistors, and thus an entire circuit, on a single chip.
6. In the beginning these circuits had only a few transistors, but as the technology improved they became more complex.
7. Integrated circuit chips are manufactured on a silicon wafer, such as the one shown in the figure below.
8. The wafer is cut to produce the individual chips, which are then placed inside a special type of chip package.
9. By 1970 it was possible to implement all circuitry needed to realize a microprocessor on a single chip.
10. Although early microprocessors had modest computing capability by today’s standards, they opened the door for the information processing revolution by providing the means for implementation of affordable personal computers.
Fig. A silicon wafer
11. In 1965 Gordon Moore, co-founder of Intel Corporation, observed that integrated circuit technology was progressing at an astounding rate, approximately doubling the number of transistors that could be placed on a chip every two years. This phenomenon, informally known as Moore’s law, continues to the present day.
12. Presently, chips can be manufactured containing billions of transistors.
13. The International Technology Roadmap for Semiconductors (ITRS) produces a forecast of how the technology is expected to evolve. This forecast discusses many aspects of technology, including the maximum number of transistors that can be manufactured on a single chip. A sample of data from the ITRS is given in the figure below.
Fig. An estimate of the maximum number of transistors per chip over time.
14. The roadmap predicts that chips with as many as 100 billion transistors will be possible by the year 2022.
15. The designer of digital hardware may be faced with designing logic circuits that can be implemented on a single chip or designing circuits that involve a number of chips placed on a printed circuit board (PCB).
16. There exists a large variety of chips that implement various functions that are useful in the design of digital hardware. The chips range from simple ones with low functionality to extremely complex chips.
17. For example, a digital hardware product may require a microprocessor to perform some arithmetic operations, memory chips to provide storage capability, and interface chips that allow easy connection to input and output devices. Such chips are available from various vendors.
18. For many digital hardware products, it is also necessary to design and build some logic circuits from scratch.
For implementing these circuits, three main types of chips may be used:
1. Standard chips
2. Programmable logic devices
3. Custom chips
1.1.1 Standard chips:
Each standard chip contains a small amount of circuitry (usually involving fewer than 100 transistors) and performs a simple function.
To build a logic circuit, the designer chooses the chips that perform whatever functions are needed and then defines how these chips should be interconnected to realize a larger logic circuit.
Standard chips were popular for building logic circuits until the early 1980s.
As integrated circuit technology improved, it became inefficient to use valuable space on PCBs for chips with low functionality.
Another drawback of standard chips is that the functionality of each chip is fixed and cannot be changed.
1.1.2. Programmable logic devices:
In contrast to standard chips that have fixed functionality, it is possible to construct chips that contain circuitry which can be configured by the user to implement a wide range of different logic circuits.
These chips have a very general structure and include a collection of programmable switches that allow the internal circuitry in the chip to be configured in many different ways.
The designer can implement whatever functions are required for a particular application by setting the programmable switches as needed.
The switches are programmed by the end user, rather than when the chip is manufactured. Such chips are known as programmable logic devices (PLDs).
PLDs are available in a wide range of sizes, and can be used to implement very large logic circuits. The most commonly-used type of PLD is known as a field-programmable gate array (FPGA).
The largest FPGAs contain billions of transistors and support the implementation of complex digital systems, which is why FPGAs are widely used today.
A drawback is that the programmable switches consume valuable chip area and limit the speed of operation of the implemented circuits.
Thus in some cases FPGAs may not meet the desired performance or cost objectives.
Fig. An FPGA board
1.1.3. Custom chips:
It is possible to design a chip from scratch; namely, the logic circuitry that must be included on the chip is designed first and then the chip is manufactured by a company that has the fabrication facilities. This approach is known as custom or semi-custom design, and such chips are often called application-specific integrated circuits (ASICs).
The main advantage of a custom chip is that its design can be optimized for a specific task, which usually leads to better performance.
It is possible to include a larger amount of logic circuitry in a custom chip than in other types of chips.
The cost of producing such chips is high, but if they are produced in large quantities the cost per chip becomes low.
A disadvantage of the custom-design approach is that manufacturing a custom chip often takes a considerable amount of time, on the order of months.
1.2. DESIGN PROCESS:
Introduction:
1. The availability of computer-based tools has greatly influenced the design process in a wide variety of environments.
2. Certain steps in the development cycle must be performed if the final product is to meet the specified objectives.
3. The flowchart in Figure depicts a typical development process. We assume that the process is to develop a product that meets certain expectations.
4. The most obvious requirements are that the product must function properly, that it must meet an expected level of performance, and that its cost should not exceed a given target.
Development Process:
1. The process begins with the definition of product specifications. The essential features of the product are identified, and an acceptable method of evaluating the implemented features in the final product is established.
2. The specifications must be tight enough to ensure that the developed product will meet the general expectations, but should not be unnecessarily constraining.
3. From a complete set of specifications, it is necessary to define the general structure of an initial design of the product. This step is difficult to automate. It is usually performed by a human designer because there is no clear-cut strategy for developing a product’s overall structure—it requires considerable design experience and intuition.
4. After the general structure is established, CAD tools are used to work out the details. Many types of CAD tools are available, ranging from those that help with the design of individual parts of the system to those that allow the entire system’s structure to be represented in a computer.
5. When the initial design is finished, the results must be verified against the original specifications.
6. CAD tools enable designers to simulate the behavior of incredibly complex products, and such simulations are used to determine whether the obtained design meets the required specifications. If errors are found, then appropriate changes are made and the verification of the new design is repeated through simulation.
7. Although some design flaws may escape detection via simulation, most problems are discovered in this way.
8. When the simulation indicates that the design is correct, a complete physical prototype of the product is constructed.
9. The prototype is thoroughly tested for conformance with the specifications. Any errors revealed in the testing must be fixed. The errors may be minor, and often they can be eliminated by making small corrections directly on the prototype of the product. In case of large errors, it is necessary to redesign the product and repeat the steps explained above.
10. When the prototype passes all the tests, then the product is deemed to be successfully designed and it can go into production.
Fig. The development process.
1.3. DESIGN OF DIGITAL HARDWARE:
1. To understand the role that logic circuits play in digital systems, consider the structure of a typical computer, as illustrated in Figure.
2. The computer case houses a number of printed circuit boards (PCBs), a power supply, and (not shown in the figure) storage units, like a hard disk and DVD or CD-ROM drives. Each unit is plugged into a main PCB, called the motherboard.
3. As indicated in the figure, the motherboard holds several integrated circuit chips, and it provides slots for connecting other PCBs, such as audio, video, and network boards.
Fig. A digital hardware system (part a)
4. The figure below illustrates the structure of an integrated circuit chip. The chip comprises a number of sub circuits, which are interconnected to build the complete circuit.
5. Examples of sub circuits are those that perform arithmetic operations, store data, or control the flow of data. Each of these sub circuits is a logic circuit.
6. As shown in the middle of the figure, a logic circuit comprises a network of connected logic gates. Each logic gate performs a very simple function, and more complex operations are realized by connecting gates together.
7. Logic gates are built with transistors, which in turn are implemented by fabricating various layers of material on a silicon chip.
Fig. A digital hardware system (part b)
8. Logic circuits can be designed with the help of CAD tools from companies such as Altera, Cadence, Mentor Graphics, Synopsys, and Xilinx.
1.4. DIGITAL REPRESENTATION OF INFORMATION:
1. Information is represented in logic circuits as electronic signals. Each of these signals can be thought of as representing one digit of information.
2. To make the design of logic circuits easier, each digit is allowed to take on only two possible values, usually denoted as 0 and 1.
3. These logic values are implemented as voltage levels in a circuit; the value 0 is usually represented as 0 V (ground), and the value 1 is the voltage level of the circuit’s power supply. Typical power-supply voltages in logic circuits range from 1 V DC to 5 V DC.
4. In general, all information in logic circuits is represented as combinations of 0 and 1 digits.
5. Before turning to logic circuits, it will be helpful to examine how numbers, alphanumeric data (text), and other information can be represented using the digits 0 and 1.
1.4.1. Binary Numbers:
1. In the familiar decimal system, a number consists of digits that have 10 possible values, from 0 to 9, and each digit represents a multiple of a power of 10.
2. The number 8547 represents 8×10^3 + 5×10^2 + 4×10^1 + 7×10^0
3. We do not normally write the powers of 10 with the number, because they are implied by the positions of the digits.
4. In general, a decimal integer is expressed by an n-tuple comprising n decimal digits
D = d_(n-1) d_(n-2) … d_1 d_0, which represents the value
V(D) = d_(n-1)×10^(n-1) + d_(n-2)×10^(n-2) + … + d_1×10^1 + d_0×10^0
This is referred to as the positional number representation.
5. Because the digits have 10 possible values and each digit is weighted as a power of 10, we say that decimal numbers are base-10 numbers. Decimal numbers are familiar, convenient, and easy to understand.
6. Digital circuits represent information using only the values 0 and 1, it is not practical to have digits that can assume ten values. In these circuits it is more appropriate to use the binary, or base-2, system which has only the digits 0 and 1. Each binary digit is called a bit.
7. In the binary number system, the same positional number representation is used so that
B = b_(n-1) b_(n-2) … b_1 b_0
represents an integer that has the value
V(B) = b_(n-1)×2^(n-1) + b_(n-2)×2^(n-2) + … + b_1×2^1 + b_0×2^0
     = Σ_(i=0)^(n-1) b_i × 2^i
For example, the binary number 1101 represents the value
V = 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0
8. To specify that 1101 is a base-2 number, we write (1101)_2. Evaluating the preceding expression for V gives V = 8 + 4 + 0 + 1 = 13. Hence
(1101)_2 = (13)_10
Table. Numbers in decimal and binary
9. The range of integers that can be represented by a binary number depends on the number of bits used.
10. Table lists the first 15 positive integers and shows their binary representations using four bits. An example of a larger number is (10110111)_2 = (183)_10. In general, using n bits allows representation of positive integers in the range 0 to 2^n − 1.
11. In a binary number the right-most bit is usually referred to as the least-significant bit (LSB). The left-most bit, which has the highest power of 2 associated with it, is called the most-significant bit (MSB). In digital systems it is often convenient to consider several bits together as a group. A group of four bits is called a nibble, and a group of eight bits is called a byte.
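The positional formula V(B) = Σ b_i × 2^i above can be checked with a short Python sketch; the helper name `binary_value` is my own, not from the text:

```python
def binary_value(bits):
    """Evaluate V(B) = sum of b_i * 2^i for a bit string given MSB-first."""
    n = len(bits)
    return sum(int(b) * 2 ** (n - 1 - i) for i, b in enumerate(bits))

print(binary_value("1101"))      # 8 + 4 + 0 + 1 = 13
print(binary_value("10110111"))  # 183, matching the example in the text
```

Enumerating the string MSB-first means bit i (counted from the right) carries weight 2^i, exactly as in the formula.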
1.4.2. Conversion between Decimal and Binary Systems:
1. A binary number is converted into a decimal number simply by applying the preceding equation and evaluating it using decimal arithmetic.
2. Converting a decimal number into a binary number is not quite as straightforward, because we need to construct the number by using powers of 2.
3. For example, the number (17)_10 is 2^4 + 2^0 = (10001)_2, and the number (50)_10 is 2^5 + 2^4 + 2^1 = (110010)_2.
4. In general, the conversion can be performed by successively dividing the decimal number by 2, as follows. Suppose that a decimal number D = d_(n-1) d_(n-2) … d_1 d_0 with value V is to be converted into a binary number B = b_(n-1) b_(n-2) … b_1 b_0. Then we can write V in the form V = b_(n-1)×2^(n-1) + b_(n-2)×2^(n-2) + … + b_1×2^1 + b_0×2^0
5. If we now divide V by 2, the result is V/2 = b_(n-1)×2^(n-2) + … + b_2×2^1 + b_1 + b_0/2
6. The quotient of this integer division is b_(n-1)×2^(n-2) + … + b_2×2^1 + b_1, and the remainder is b_0. If the remainder is 0, then b_0 = 0; if it is 1, then b_0 = 1.
7. Observe that the quotient is just another binary number, which comprises n − 1 bits rather than n bits. Dividing this number by 2 yields the remainder b_1. The new quotient is b_(n-1)×2^(n-3) + … + b_2.
8. Continuing the process of dividing the new quotient by 2, and determining one bit in each step, will produce all bits of the binary number. The process continues until the quotient becomes 0.
9. The figure below illustrates the conversion process, using the example (857)_10 = (1101011001)_2. Note that the least-significant bit (LSB) is generated first and the most-significant bit (MSB) is generated last.
Fig. Conversion from decimal to binary.
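The successive-division procedure can be sketched in Python; the function name `decimal_to_binary` is a hypothetical helper of my own:

```python
def decimal_to_binary(v):
    """Convert a non-negative integer to a binary string by successive
    division by 2; the remainders give the bits, LSB first."""
    if v == 0:
        return "0"
    bits = []
    while v > 0:
        v, r = divmod(v, 2)   # integer quotient and remainder
        bits.append(str(r))   # the LSB is generated first
    return "".join(reversed(bits))

print(decimal_to_binary(857))  # 1101011001, as in the figure
```

Because the LSB comes out first, the collected remainders must be reversed to print the number MSB-first.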
1.4.3. ASCII Character Code:
1. Alphanumeric information, such as letters and numbers typed on a computer keyboard, is represented as codes consisting of 0 and 1 digits. The most common code used for this type of information is known as the ASCII code, which stands for the American Standard Code for Information Interchange.
2. The code specified by this standard is presented in the following table
Table. The seven-bit ASCII code
3. The ASCII code uses seven-bit patterns to denote 128 different characters. Ten of the characters are the decimal digits 0 to 9. As the table shows, the high-order bits have the same pattern, b_6 b_5 b_4 = 011, for all 10 digits. Each digit is identified by the low-order four bits, b_3 to b_0, using the binary patterns for these digits.
4. The ASCII code is used to encode information that is handled as text. It is not convenient for representation of numbers that are used as operands in arithmetic operations. For this purpose, it is best to convert ASCII-encoded numbers into a binary representation that we discussed before.
5. The ASCII standard uses seven bits to encode a character. In computer systems a more natural size is eight bits, or one byte. There are two common ways of fitting an ASCII-encoded character into a byte. One is to set the eighth bit, b_7, to 0. Another is to use this bit to indicate the parity of the other seven bits, which means showing whether the number of 1s in the seven-bit code is even or odd.
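The parity option can be illustrated with a short Python sketch; the helper name `ascii_byte_with_parity` is my own, and even parity is assumed by default:

```python
def ascii_byte_with_parity(ch, even=True):
    """Pack a 7-bit ASCII code into a byte, using bit b7 as a parity bit.
    With even parity, b7 is chosen so the total number of 1s is even."""
    code = ord(ch)
    assert code < 128, "not a 7-bit ASCII character"
    ones = bin(code).count("1")          # 1s in the seven-bit code
    b7 = (ones % 2) if even else (1 - ones % 2)
    return (b7 << 7) | code

# 'A' is 1000001 (two 1s), so even parity leaves b7 = 0
print(format(ascii_byte_with_parity("A"), "08b"))
```

A receiver can recount the 1s in the byte and detect any single-bit transmission error, since such an error always flips the parity.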
1.4.4. Digital and Analog Information:
1. Binary numbers can be used to represent many types of information. For example, they can represent music that is stored in a personal music player.
2. Below figure illustrates a music player, which contains an electronic memory for storing music files. A music file comprises a sequence of binary numbers that represent tones.
3. To convert these binary numbers into sound, a digital-to-analog (D/A) converter circuit is used. It converts digital values into corresponding voltage levels, which create an analog voltage signal that drives the speakers inside the headphones.
4. The binary values stored in the music player are referred to as digital information, whereas the voltage signal that drives the speakers is analog information.
Fig. Using digital technology to represent music.
1.5. INTRODUCTION TO LOGIC CIRCUITS:
1. Logic circuits are used in digital computers.
2. Such circuits also form the foundation of many other digital systems, such as those that perform control applications or are involved in digital communications. All such applications are based on some simple logical operations that are performed on input information.
3. Information in computers is represented as electronic signals that can have two discrete values. Although these values are implemented as voltage levels in a circuit, we refer to them simply as logic values, 0 and 1.
4. Any circuit in which the signals are constrained to have only some number of discrete values is called a logic circuit. Logic circuits can be designed with different numbers of logic values, such as three, four, or even more.
5. In this text we deal with binary logic circuits, which have two logic values.
6. Binary logic circuits have the dominant role in digital technology.
1.5.1. Variables and Functions:
1. The dominance of binary circuits in digital systems is a consequence of their simplicity, which results from constraining the signals to assume only two possible values.
2. The simplest binary element is a switch that has two states. If a given switch is controlled by an input variable x, then we will say that the switch is open if x = 0 and closed if x = 1, as illustrated in Figure
Fig. A binary switch
3. Consider a simple application of a switch, where the switch turns a small light bulb on or off. This action is accomplished with the circuit in Figure
4. A battery provides the power source. The light bulb glows when a sufficient amount of current passes through it.
5. The current flows when the switch is closed, that is, when x = 1. In this example the input that causes changes in the behavior of the circuit is the switch control x.
6. The output is defined as the state (or condition) of the light, which we will denote by the letter L. If the light is on, we will say that L = 1. If the light is off, we will say that L = 0.
7. Using this convention, we can describe the state of the light as a function of the input variable x. Since L = 1 if x = 1 and L = 0 if x = 0, we can say that L(x) = x
8. This simple logic expression describes the output as a function of the input. We say that L(x) = x is a logic function and that x is an input variable.
9. Consider now the possibility of using two switches to control the state of the light. Let x_1 and x_2 be the control inputs for these switches. The switches can be connected either in series or in parallel as shown in Figure
Fig. A light controlled by a switch
Fig. Two basic functions
10. Using a series connection, the light will be turned on only if both switches are closed. If either switch is open, the light will be off.
11. This behavior can be described by the expression
L(x_1, x_2) = x_1 • x_2
Where L = 1 if x_1 = 1 and x_2 = 1,
L = 0 otherwise
12. The “•” symbol is called the AND operator, and the circuit in Figure a is said to implement a logical AND function.
13. The parallel connection of two switches is given in Figure b. In this case the light will be on if either the x_1 or x_2 switch is closed. The light will also be on if both switches are closed. The light will be off only if both switches are open.
14. This behavior can be stated as
L(x_1, x_2) = x_1 + x_2
Where L = 1 if x_1 = 1 or x_2 = 1 or if x_1 = x_2 = 1,
L = 0 if x_1 = x_2 = 0.
15. The + symbol is called the OR operator, and the circuit in Figure b is said to implement a logical OR function.
16. Below figure illustrates how three switches can be used to control the light in a more complex way. This series-parallel connection of switches realizes the logic function
L(x_1, x_2, x_3) = (x_1 + x_2) • x_3
The light is on if x_3 = 1 and, at the same time, at least one of the x_1 or x_2 inputs is equal to 1.
Fig. A series-parallel connection
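The behavior of this series-parallel connection can be tabulated with a short Python sketch, treating a closed switch as 1 and an open switch as 0:

```python
from itertools import product

def L(x1, x2, x3):
    """L(x1, x2, x3) = (x1 OR x2) AND x3 — the series-parallel connection."""
    return (x1 | x2) & x3

# Enumerate all eight switch settings
for x1, x2, x3 in product((0, 1), repeat=2 + 1):
    print(x1, x2, x3, "->", L(x1, x2, x3))
```

The printout confirms the statement above: the light is on exactly when x_3 = 1 and at least one of x_1, x_2 is 1.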
Inversion:
1. So far we have assumed that some positive action takes place when a switch is closed, such as turning the light on. It is equally interesting and useful to consider the possibility that a positive action takes place when a switch is opened.
Fig. An Inverting circuit.
2. In this case the switch is connected in parallel with the light, rather than in series. Consequently, a closed switch will short-circuit the light and prevent the current from flowing through it.
3. Note that we have included an extra resistor in this circuit to ensure that the closed switch does not short-circuit the power supply. The light will be turned on when the switch is opened.
4. Formally, we express this functional behavior as
L(x) = x′
Where L = 1 if x = 0,
L = 0 if x = 1
5. The value of this function is the inverse of the value of the input variable. Instead of using the word inverse, it is more common to use the term complement. Thus we say that L(x) is a complement of x in this example. Another frequently used term for the same operation is the NOT operation.
6. There are several commonly used notations for indicating the complementation.
Thus the following are equivalent:
x̄ = x′ = !x = ~x = NOT x
Truth Tables:
1. A logical operation can also be defined in the form of a table, called a truth table.
2. As shown in Table the first two columns (to the left of the double vertical line) give all four possible combinations of logic values that the variables x1 and x2 can have. The next column defines the AND operation for each combination of values of x1 and x2, and the last column defines the OR operation
Table. A truth table for the AND and OR operation
x_1  x_2 | x_1 • x_2 (AND) | x_1 + x_2 (OR)
 0    0  |        0        |       0
 0    1  |        0        |       1
 1    0  |        0        |       1
 1    1  |        1        |       1
3. The truth table is a useful aid for depicting information involving logic functions. We will use it to define specific functions and to show the validity of certain functional relations.
4. Small truth tables are easy to deal with. However, they grow exponentially in size with the number of variables. A truth table for three input variables has eight rows because there are eight possible valuations of these variables.
5. In general, for n input variables the truth table has 2^n rows.
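The exponential growth of truth tables can be demonstrated in Python; `truth_table` below is a hypothetical helper of my own that enumerates all 2^n input valuations:

```python
from itertools import product

def truth_table(f, n):
    """Return the 2**n rows of the truth table of an n-input function."""
    return [(row, f(*row)) for row in product((0, 1), repeat=n)]

AND = lambda a, b: a & b
print(len(truth_table(AND, 2)))                       # 4 rows
print(len(truth_table(lambda a, b, c: a | b | c, 3))) # 8 rows for 3 inputs
```

Each additional input variable doubles the number of rows, which is why truth tables are practical only for small numbers of variables.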
1.5.2. Logic Gates and Networks:
1. A complex function may require many of the basic operations for its implementation. Each logic operation can be implemented electronically with transistors, resulting in a circuit element called a logic gate.
2. A logic gate has one or more inputs and one output that is a function of its inputs. It is often convenient to describe a logic circuit by drawing a circuit diagram, or schematic, consisting of graphical symbols representing the logic gates.
3. The graphical symbols for the AND, OR, and NOT gates are shown in Figure
4. The figure indicates on the left side how the AND and OR gates are drawn when there are only a few inputs. On the right side it shows how the symbols are augmented to accommodate a greater number of inputs.
5. A larger circuit is implemented by a network of gates.
6. For example, the series-parallel logic function from the earlier figure can be implemented by the network in the figure below. The complexity of a given network has a direct impact on its cost. Because it is always desirable to reduce the cost of any manufactured product, it is important to find ways of implementing logic circuits as inexpensively as possible.
7. A network of gates is often called a logic network or simply a logic circuit.
Fig. The basic gates
Fig. The function from series-parallel connection
Analysis of a Logic Network:
1. For an existing logic network, it must be possible to determine the function performed by the network. This task is referred to as the analysis process.
2. The reverse task of designing a new network that implements a desired functional behavior is referred to as the synthesis process.
3. The analysis process is rather straightforward and much simpler than the synthesis process.
4. Figure shows a simple network consisting of three gates. To analyze its functional behavior, we can consider what happens if we apply all possible combinations of the input signals to it.
5. Suppose that we start by making x_1 = x_2 = 0. This forces the output of the NOT gate to be equal to 1 and the output of the AND gate to be 0. Because one of the inputs to the OR gate is 1, the output of this gate will be 1
6. Therefore, f = 1 if x_1 = x_2 = 0. If we then let x_1 = 0 and x_2 = 1, no change in the value of f will take place, because the outputs of the NOT and AND gates will still be 1 and 0, respectively. Next, if we apply x_1 = 1 and x_2 = 0, then the output of the NOT gate changes to 0 while the output of the AND gate remains at 0. Both inputs to the OR gate are then equal to 0; hence the value of f will be 0.
7. Finally, let x_1 = x_2= 1. Then the output of the AND gate goes to 1, which in turn causes f to be equal to 1. Our verbal explanation can be summarized in the form of the truth table shown in Figure b.
Fig . An example of logic networks.
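The analysis procedure amounts to evaluating the network for every input valuation. A minimal Python sketch (the gate-output names A and B follow the labels mentioned later for the timing diagram; naming them here is my assumption):

```python
from itertools import product

def f(x1, x2):
    """Network of Figure a: a NOT gate and an AND gate feed an OR gate."""
    A = 1 - x1        # NOT gate output
    B = x1 & x2       # AND gate output
    return A | B      # OR gate output

def g(x1, x2):
    """Simpler network: x1' OR x2."""
    return (1 - x1) | x2

for x1, x2 in product((0, 1), repeat=2):
    print(x1, x2, "->", f(x1, x2), g(x1, x2))
```

The loop reproduces the four cases discussed above (f = 1, 1, 0, 1), and also shows that f and g agree everywhere, anticipating the discussion of functionally equivalent networks.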
Timing Diagram:
1. We have determined the behavior of the network in Figure a by considering the four possible valuations of the inputs x_1 and x_2. Suppose that the signals that correspond to these valuations are applied to the network in the order of our discussion; that is, (x_1, x_2) = (0, 0) followed by (0, 1), (1, 0), and (1, 1). Then changes in the signals at various points in the network would be as indicated in blue in the figure. The same information can be presented in graphical form, known as a timing diagram, as shown in Figure c.
2. The time runs from left to right, and each input valuation is held for some fixed duration. The figure shows the waveforms for the inputs and output of the network, as well as for the internal signals at the points labeled A and B.
3. The timing diagram in Figure c shows that changes in the waveforms at points A and B and the output f take place instantaneously when the inputs x_1 and x_2 change their values. These idealized waveforms are based on the assumption that logic gates respond to changes on their inputs in zero time. Such timing diagrams are useful for indicating the functional behavior of logic circuits.
4. Practical logic gates are implemented using electronic circuits which need some time to change their states. Thus, there is a delay between a change in input values and a corresponding change in the output value of a gate.
5. Timing diagrams are used for many purposes. They depict the behavior of a logic circuit in a form that can be observed when the circuit is tested using instruments such as logic analyzers and oscilloscopes.
Functionally Equivalent Networks:
1. Now consider the network in Figure d. Going through the same analysis procedure, we find that the output g changes in exactly the same way as f does in part (a) of the figure. Therefore, g(x_1, x_2) = f(x_1, x_2), which indicates that the two networks are functionally equivalent.
2. The output behavior of both networks is represented by the truth table in Figure b. Since both networks realize the same function, it makes sense to use the simpler one, which is less costly to implement.
3. We have already established this equivalence through detailed analysis of the two circuits and construction of the truth table. But the same outcome can be achieved through algebraic manipulation of logic expressions.
1.6. BOOLEAN ALGEBRA:
1. In 1849 George Boole published a scheme for the algebraic description of processes involved in logical thought and reasoning. Subsequently, this scheme and its further refinements became known as Boolean algebra. It was almost 100 years later that this algebra found application in the engineering sense.
2. Algebra is a powerful tool that can be used for designing and analyzing logic circuits. It provides the foundation for much of our modern digital technology.
Axioms of Boolean algebra:
1. Like any algebra, Boolean algebra is based on a set of rules that are derived from a small number of basic assumptions. These assumptions are called axioms. Let us assume that Boolean algebra involves elements that take on one of two values, 0 and 1.
2. Assume that the following axioms are true:
1a. 0• 0 = 0
1b. 1+ 1 = 1
2a. 1• 1 = 1
2b. 0+ 0 = 0
3a. 0• 1 = 1 • 0 = 0
3b. 1+ 0 = 0 + 1 = 1
4a. If x = 0, then x′ = 1
4b. If x = 1, then x′ = 0
Single-Variable Theorems:
1. From the axioms we can define some rules for dealing with single variables. These rules are often called theorems.
2. If x is a Boolean variable, then the following theorems hold:
5a. x • 0 = 0
5b. x + 1 = 1
6a. x • 1 = x
6b. x + 0 = x
7a. x • x = x
7b. x + x = x
8a. x • x′ = 0
8b. x + x′ = 1
9. (x′)′ = x
3. It is easy to prove the validity of these theorems by perfect induction, that is, by substituting the values x = 0 and x = 1 into the expressions and using the axioms given above.
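Perfect induction is easy to mechanize. The sketch below substitutes x = 0 and x = 1 into each single-variable theorem, modeling AND, OR, and NOT as &, |, and 1 − x:

```python
# Perfect induction: substitute x = 0 and x = 1 into each theorem
for x in (0, 1):
    x_not = 1 - x              # the complement x'
    assert (x & 0) == 0        # 5a. x·0 = 0
    assert (x | 1) == 1        # 5b. x + 1 = 1
    assert (x & 1) == x        # 6a. x·1 = x
    assert (x | 0) == x        # 6b. x + 0 = x
    assert (x & x) == x        # 7a. x·x = x
    assert (x | x) == x        # 7b. x + x = x
    assert (x & x_not) == 0    # 8a. x·x' = 0
    assert (x | x_not) == 1    # 8b. x + x' = 1
    assert (1 - x_not) == x    # 9.  (x')' = x
print("all single-variable theorems hold")
```

Since a Boolean variable has only two possible values, these nine checks over both values constitute a complete proof.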
Duality:
1. Given a logic expression, its dual is obtained by replacing all + operators with • operators, and vice versa, and by replacing all 0s with 1s, and vice versa.
2. The dual of any true statement (axiom or theorem) in Boolean algebra is also a true statement.
3. Duality implies that at least two different ways exist to express every logic function with Boolean algebra
4. Often, one expression leads to a simpler physical implementation than the other and is thus preferable.
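As an illustration of duality, the following sketch verifies the absorption property 13a from the next subsection together with its dual 13b, obtained by swapping the AND and OR operators, over all valuations:

```python
from itertools import product

# Check absorption and its dual over all four valuations of (x, y)
for x, y in product((0, 1), repeat=2):
    assert (x | (x & y)) == x    # 13a. x + x·y = x
    assert (x & (x | y)) == x    # 13b. x·(x + y) = x  (the dual)
print("absorption and its dual both hold")
```

Note how the second assertion is obtained from the first purely by exchanging | and &, as duality prescribes.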
Two- and Three-Variable Properties :
1. To enable us to deal with a number of variables, it is useful to define some two- and three-variable algebraic identities. For each identity, its dual version is also given. These identities are often referred to as properties.
2. They are known by the names indicated below.If x, y, and z are Boolean variables, then the following properties hold:
10a. x • y = y • x Commutative
10b. x + y = y + x
11a. x • ( y • z) = (x • y) • z Associative
11b. x + ( y + z) = (x + y) + z
12a. x • ( y + z) = x • y + x • z Distributive
12b. x + y • z = (x + y) • (x + z)
13a. x + x • y = x Absorption
13b. x • (x + y) = x
14a. x • y + x • y′ = x Combining
14b. (x + y) • (x + y′) = x
15a. (x • y)′ = x′ + y′ DeMorgan’s theorem
15b. (x + y)′ = x′ • y′
16a. x + x′ • y = x + y
16b. x • (x′ + y) = x • y
17a. x • y + y • z + x′ • z = x • y + x′ • z Consensus
17b. (x + y) • (y + z) • (x′ + z) = (x + y) • (x′ + z)
3. Again, we can prove the validity of these properties either by perfect induction or by performing algebraic manipulation.
4. Figure illustrates how perfect induction can be used to prove DeMorgan’s theorem, using the format of a truth table. Evaluating the left-hand and right-hand sides of the identity in 15a gives the same result for every valuation of x and y.
Fig. Proof of DeMorgan’s theorem in 15a
5. We have a number of axioms, theorems, and properties. Not all of these are necessary to define Boolean algebra.
6. The preceding axioms, theorems, and properties provide the information necessary for performing algebraic manipulation of more complex expressions.
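The truth-table proof of DeMorgan's theorem described above can be reproduced as a quick exhaustive check. This Python sketch (operator encoding is an illustration, not the text's notation) also checks the consensus property 17a, whose validity is less obvious by eye.

```python
# Perfect-induction check of DeMorgan's theorem (15a, 15b) and the
# consensus property (17a) over all valuations.
from itertools import product

NOT = lambda a: 1 - a

for x, y in product((0, 1), repeat=2):
    assert NOT(x & y) == (NOT(x) | NOT(y))   # 15a
    assert NOT(x | y) == (NOT(x) & NOT(y))   # 15b

for x, y, z in product((0, 1), repeat=3):
    lhs = (x & y) | (y & z) | (NOT(x) & z)   # 17a left-hand side
    rhs = (x & y) | (NOT(x) & z)             # the y·z term is redundant
    assert lhs == rhs
```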
Ex: Let us prove the validity of the logic equation
(x_1 + x_3) • ((x_1 ) ̅ + (x_3 ) ̅ ) = x_1 • (x_3 ) ̅ + (x_1 ) ̅ • x_3
Sol: The left-hand side can be manipulated as follows. Using the distributive property 12a gives
LHS = (x_1 + x_3) •(x_1 ) ̅ + (x_1 + x_3) •(x_3 ) ̅
Applying the distributive property again yields
LHS = x_1 •(x_1 ) ̅ + x_3 •(x_1 ) ̅ + x_1 •(x_3 ) ̅ + x_3 •(x_3 ) ̅
Note that the distributive property allows ANDing the terms in parentheses in a way analogous to multiplication in ordinary algebra. Next, according to theorem 8a, the terms x_1 •(x_1 ) ̅ and x_3 • (x_3 ) ̅ are both equal to 0. Therefore,
LHS = 0 + x_3 •(x_1 ) ̅ + x_1•(x_3 ) ̅ + 0
From 6b it follows that
LHS = x_3• (x_1 ) ̅+ x_1 •(x_3 ) ̅
Finally, using the commutative property, 10a and 10b, this becomes
LHS = x_1•(x_3 ) ̅ + (x_1 ) ̅•x_3
which is the same as the right-hand side of the initial equation, completing the proof.
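The algebraic proof above can also be confirmed by perfect induction. A Python sketch (the 0/1 bitwise encoding is an illustrative choice):

```python
# Verify (x1 + x3)·(x1' + x3') = x1·x3' + x1'·x3 for all valuations.
from itertools import product

NOT = lambda a: 1 - a

for x1, x3 in product((0, 1), repeat=2):
    lhs = (x1 | x3) & (NOT(x1) | NOT(x3))
    rhs = (x1 & NOT(x3)) | (NOT(x1) & x3)
    assert lhs == rhs
```

Note that both sides equal the exclusive-OR of x_1 and x_3, which is why the identity holds.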
1.7. SYNTHESIS USING GATES:
1. Using the theorems and properties introduced so far, we can implement arbitrary functions with AND, OR, and NOT gates.
2. As an example, suppose a circuit must continuously monitor the state of two switches, x1 and x2, and produce an output logic value 1 whenever the switches are in states (0, 0), (0, 1), or (1, 1). If the state of the switches is (1, 0), the output should be 0.
3. We can express the required behavior using a truth table, as shown in Figure
Fig. A function to be synthesized
4. A possible procedure for designing a logic circuit that implements this truth table is to create a product term that has a value of 1 for each valuation for which the output function f has to be 1.
5. Then we can take a logical sum of these product terms to realize f
6. Let us begin with the fourth row of the truth table, which corresponds to x_1 = x_2 = 1. The product term that is equal to 1 for this valuation is x_1 •x_2, which is just the AND of x_1 and x_2
7. Next consider the first row of the table, for which x_1 = x_2 = 0. For this valuation the value 1 is produced by the product term (x_1 ) ̅ •(x_2 ) ̅. Similarly, the second row leads to the term (x_1 ) ̅ •x_2.
8. Thus f may be realized as f (x1, x2) = x_1 x_2 +(x_1 ) ̅ (x_2 ) ̅ + (x_1 ) ̅ x_2
9. The logic network that corresponds to this expression is shown in Figure a
10. Although this network implements f correctly, it is not the simplest such network. To find a simpler network, we can manipulate the obtained expression using the theorems and properties.
11. According to theorem 7b, we can replicate any term in a logical sum expression. Replicating the third product term, the above expression becomes
f (x_1, x_2) = x_1 x_2 +(x_1 ) ̅ (x_2 ) ̅ + (x_1 ) ̅ x_2 + (x_1 ) ̅ x_2
12. Now the distributive property 12a allows us to write
f (x_1, x_2) = ( x_1+(x_1 ) ̅)x_2 + ( (x_2 ) ̅ + x_2)(x_1 ) ̅
Fig. Two implementations of the function
13. Applying theorem 8b we get
f (x_1, x_2) = 1 • x_2 + 1 • (x_1 ) ̅
14. Finally, theorem 6a leads to
f (x_1, x_2) = x_2 + (x_1 ) ̅
15. The network described by this expression is given in Figure b. Obviously, the cost of this network is much less than the cost of the network in part (a) of the figure.
16. Algebraic manipulation can thus be used to derive simplified logic expressions and hence lower-cost networks.
17. The process whereby we begin with a description of the desired functional behavior and then generate a circuit that realizes this behavior is called synthesis. Thus we can say that we “synthesized” the networks from the truth table.
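The synthesis procedure just described can be sketched in a few lines of Python (the truth table is the two-switch example from this section; the helper names and 0/1 encoding are mine, not the text's):

```python
# Build a product term for each row where f = 1, OR the terms together,
# and confirm that the simplified form x2 + x1' computes the same function.
from itertools import product

rows = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}   # truth table of f

def canonical(x1, x2):
    # lit(v, 1) keeps the variable, lit(v, 0) complements it
    lit = lambda v, val: v if val else 1 - v
    # max over the product terms acts as a logical OR
    return max(lit(x1, a) & lit(x2, b)
               for (a, b), f in rows.items() if f == 1)

for x1, x2 in product((0, 1), repeat=2):
    assert canonical(x1, x2) == rows[(x1, x2)]
    assert canonical(x1, x2) == (x2 | (1 - x1))       # f = x2 + x1'
```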
Ex: From the given truth table, derive the simplified function and implement it.
Sol: The truth table yields the canonical sum-of-products expression
f = (s_1 ) ̅ (s_2 ) ̅ s_3 + (s_1 ) ̅ s_2 s_3 + s_1 (s_2 ) ̅ s_3 + s_1 s_2 (s_3 ) ̅ + s_1 s_2 s_3
We can first use rule 7b to repeat the term s_1 s_2 s_3, and then use the distributive property 12a and rule 8b to simplify the expression.
f = (s_1 ) ̅ (s_2 ) ̅ s_3 + (s_1 ) ̅ s_2 s_3 + s_1 (s_2 ) ̅ s_3 + s_1 s_2 s_3 + s_1 s_2 (s_3 ) ̅ + s_1 s_2 s_3
f = (s_1 ) ̅ s_3 ((s_2 ) ̅ + s_2) + s_1 s_3 ((s_2 ) ̅ + s_2) + s_1 s_2 ((s_3 ) ̅ + s_3)
f = (s_1 ) ̅ s_3 + s_1 s_3 + s_1 s_2
f = s_3 ((s_1 ) ̅ + s_1) + s_1 s_2
f = s_3 + s_1 s_2
Fig. Implementation of the simplified function f = s_3 + s_1 s_2
Sum-of-Products and Product-of-Sums Forms :
1. If a function f is specified in the form of a truth table, then an expression that realizes f can be obtained by considering either the rows in the table for which f = 1,as we have already done, or by considering the rows for which f = 0.
Minterms:
1. For a function of n variables, a product term in which each of the n variables appears once is called a minterm.
2. The variables may appear in a minterm either in uncomplemented or complemented form. For a given row of the truth table, the minterm is formed by including x_i if x_i = 1 and by including (x_i ) ̅ if x_i = 0.
3. To illustrate this concept, consider the truth table in Figure. We have numbered the rows of the table from 0 to 7, so that we can refer to them easily.
4. From the discussion of the binary number representation in previous section, we can observe that the row numbers chosen are just the numbers represented by the bit patterns of variables x_(1 ), x_(2 ), and x_(3 )
5. The figure shows all minterms for the three-variable table
Fig. Three-variable minterms and maxterms.
6. For example, in the first row the variables have the values x_(1 ) = x_(2 ) = x_(3 ) = 0, which leads to the minterm (x_(1 ) ) ̅ (x_(2 ) ) ̅ (x_(3 ) ) ̅. In the second row x_(1 )= x_(2 ) = 0 and x_(3 ) = 1, which gives the minterm (x_(1 ) ) ̅ (x_(2 ) ) ̅ x_(3 ), and so on.
7. To be able to refer to the individual minterms easily, it is convenient to identify each minterm by an index that corresponds to the row numbers shown in the figure. We will use the notation m_(i ) to denote the minterm for row number i. Thus m_(0 ) = (x_(1 ) ) ̅ (x_(2 ) ) ̅ (x_(3 ) ) ̅, m_(1 ) = (x_(1 ) ) ̅ (x_(2 ) ) ̅ x_(3 ), and so on.
Sum-of-Products Form :
1. A function f can be represented by an expression that is a sum of minterms, where each minterm is ANDed with the value of f for the corresponding valuation of input variables.
2. For example, the two-variable minterms are m_0 = (x_1 ) ̅ (x_2 ) ̅, m_1 = (x_1 ) ̅ x_2, m_2 = x_1 (x_2 ) ̅, and m_3 = x_1 x_2. The function in Figure can be represented as
f = m_0 • 1 + m_1 • 1 + m_2 • 0 + m_3 • 1
= m_0 + m_1 + m_3
= (x_1 ) ̅ (x_2 ) ̅ + (x_1 ) ̅ x_2 + x_1 x_2
3. Any function f can be represented by a sum of minterms that correspond to the rows in the truth table for which f = 1. The resulting implementation is functionally correct and unique, but it is not necessarily the lowest-cost implementation of f.
4. A logic expression consisting of product (AND) terms that are summed (ORed) is said to be in sum-of-products (SOP) form. If each product term is a minterm, then the expression is called a canonical sum-of-products for the function f.
5. The first step in the synthesis process is to derive a canonical sum-of-products expression for the given function. Then we can manipulate this expression, using the theorems and properties, with the goal of finding a functionally equivalent sum-of-products expression that has a lower cost.
Ex: Consider the three-variable function f (x_1, x_2, x_3) specified by the truth table in Figure. Minimize the function and realize the logic diagram.
Fig. A three-variable function
Sol: To synthesize this function, we have to include the minterms m_1, m_4, m_5, and m_6. Copying these minterms from the figure leads to the following canonical sum-of-products expression for f
f (x_1, x_2, x_3) = (x_1 ) ̅ (x_2 ) ̅ x_3 + x_1 (x_2 ) ̅ (x_3 ) ̅ + x_1 (x_2 ) ̅ x_3 + x_1 x_2 (x_3 ) ̅
1. This expression can be manipulated as follows
f (x_1, x_2, x_3) = ((x_1 ) ̅ + x_1)(x_2 ) ̅ x_3 + ((x_2 ) ̅ + x_2) x_1 (x_3 ) ̅
f (x_1, x_2, x_3) = 1 • (x_2 ) ̅ x_3 + 1 • x_1 (x_3 ) ̅
f (x_1, x_2, x_3) = (x_2 ) ̅ x_3 + x_1 (x_3 ) ̅
2. This is the minimum-cost sum-of-products expression for f . It describes the circuit shown in below figure.
3. A good indication of the cost of a logic circuit is the total number of gates plus the total number of inputs to all gates in the circuit.
Fig. A minimal sum-of-products realization
4. Using this measure, the cost of the network in Figure is 13, because there are five gates and eight inputs to the gates.
5. By comparison, the network implemented on the basis of the canonical sum-of- products would have a cost of 27
6. Minterms, with their row-number subscripts, can also be used to specify a given function in a more concise form
7. For example, the above function can be specified as
f (x_1, x_2, x_3) = Ʃ(m_1, m_4, m_5, m_6)
or even more simply as
f (x_1, x_2, x_3) = Ʃ m(1, 4, 5, 6)
The Ʃ sign denotes the logical sum operation. This shorthand notation is often used in practice.
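As a sketch of what the shorthand means, the minterm list can be expanded mechanically and checked against the minimal form derived in the example above (Python; the bit-packing of (x1, x2, x3) into a row index is my encoding choice):

```python
# Expand f = Σ m(1, 4, 5, 6) from its minterm indices and confirm the
# minimal SOP form x2'·x3 + x1·x3' computes the same function.
from itertools import product

MINTERMS = {1, 4, 5, 6}

def f_canonical(x1, x2, x3):
    # row index is the binary number x1 x2 x3
    return 1 if (x1 << 2 | x2 << 1 | x3) in MINTERMS else 0

for x1, x2, x3 in product((0, 1), repeat=3):
    minimal = ((1 - x2) & x3) | (x1 & (1 - x3))
    assert f_canonical(x1, x2, x3) == minimal
```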
Ex: Consider the function f (x_1, x_2, x_3) = Ʃ m(2, 3, 4, 6, 7).
The canonical SOP expression for the function is derived using minterms:
f (x_1, x_2, x_3) = m_2 + m_3 + m_4 + m_6 + m_7
f (x_1, x_2, x_3) = (x_1 ) ̅ x_2 (x_3 ) ̅ + (x_1 ) ̅ x_2 x_3 + x_1 (x_2 ) ̅ (x_3 ) ̅ + x_1 x_2 (x_3 ) ̅ + x_1 x_2 x_3
This expression can be simplified as follows
f (x_1, x_2, x_3) = (x_1 ) ̅ x_2 (x_3 ) ̅ + (x_1 ) ̅ x_2 x_3 + x_1 (x_2 ) ̅ (x_3 ) ̅ + x_1 x_2 (x_3 ) ̅ + x_1 x_2 (x_3 ) ̅ + x_1 x_2 x_3
(the term x_1 x_2 (x_3 ) ̅ is replicated, which theorem 7b allows since x_1 x_2 (x_3 ) ̅ + x_1 x_2 (x_3 ) ̅ = x_1 x_2 (x_3 ) ̅ )
f (x_1, x_2, x_3) = (x_1 ) ̅ x_2 ((x_3 ) ̅ + x_3) + x_1 (x_3 ) ̅ ((x_2 ) ̅ + x_2) + x_1 x_2 ((x_3 ) ̅ + x_3)
f (x_1, x_2, x_3) = (x_1 ) ̅ x_2 + x_1 (x_3 ) ̅ + x_1 x_2
f (x_1, x_2, x_3) = ((x_1 ) ̅ + x_1) x_2 + x_1 (x_3 ) ̅
f (x_1, x_2, x_3) = x_2 + x_1 (x_3 ) ̅
Ex: Minimize the given four-variable function
f (x_1, x_2, x_3, x_4) = Ʃ m(3, 7, 9, 12, 13, 14, 15)
Maxterms:
1. The principle of duality suggests that if it is possible to synthesize a function f by considering the rows in the truth table for which f = 1, then it should also be possible to synthesize f by considering the rows for which f = 0. This alternative approach uses the complements of minterms, which are called maxterms.
2. All possible maxterms for three-variable functions are listed in Figure. We will refer to a maxterm M_j by the same row number as its corresponding minterm m_j, as shown in the figure.
Fig. Three-variable minterms and maxterms.
Product-of-Sums Form:
1. If a given function f is specified by a truth table, then its complement (f ) ̅ can be represented by a sum of the minterms for which (f ) ̅ = 1, which are the rows where f = 0. For example, for the function in below Figure
(f (x_1, x_2) ) ̅ = m_2 = x_1 (x_2 ) ̅
If we complement this expression using DeMorgan’s theorem, the result is f = (x_1 (x_2 ) ̅ ) ̅ = (x_1 ) ̅ + x_2
2. Note that we obtained this expression previously by algebraic manipulation of the canonical sum-of-products form for the function f . The key point here is that
f = (m_2 ) ̅ = M_2
where M_2 is the maxterm for row 2 in the truth table.
3. As another example, consider the function in below Figure. The complement of this function can be represented as
Fig. A three-variable function
(f (x_1, x_2, x_3) ) ̅ = m_0 + m_2 + m_3 + m_7
= (x_1 ) ̅ (x_2 ) ̅ (x_3 ) ̅ + (x_1 ) ̅ x_2 (x_3 ) ̅ + (x_1 ) ̅ x_2 x_3 + x_1 x_2 x_3
Then f can be expressed as
f = (m_0 + m_2 + m_3 + m_7 ) ̅
= (m_0 ) ̅ • (m_2 ) ̅ • (m_3 ) ̅ • (m_7 ) ̅
= M_0 • M_2 • M_3 • M_7
= (x_1 + x_2 + x_3)(x_1 + (x_2 ) ̅ + x_3)(x_1 + (x_2 ) ̅ + (x_3 ) ̅ )((x_1 ) ̅ + (x_2 ) ̅ + (x_3 ) ̅ )
This expression represents f as a product of maxterms.
4. A logic expression consisting of sum (OR) terms that are the factors of a logical product (AND) is said to be of the product-of-sums (POS) form. If each sum term is a maxterm, then the expression is called a canonical product-of-sums for the given function.
5. Any function f can be synthesized by finding its canonical product-of-sums. This involves taking the maxterm for each row in the truth table for which f = 0 and forming a product of these maxterms.
6. We can reduce the complexity of the derived expression that comprises a product of maxterms. Using the commutative property 10b and the associative property 11b, this expression can be written as
f = ((x_1 + x_3) + x_2)((x_1 + x_3) + (x_2 ) ̅ )(x_1 + ((x_2 ) ̅ + (x_3 ) ̅ ))((x_1 ) ̅ + ((x_2 ) ̅ + (x_3 ) ̅ ))
7. Then, using the combining property 14b, the expression reduces to
f = (x_1 + x_3)((x_2 ) ̅ + (x_3 ) ̅ )
8. The corresponding network is given in Figure
Fig. A minimal product-of-sums realization
9. The cost of this network is 13
10. Using the shorthand notation, an alternative way of specifying our sample function is
f (x_1, x_2, x_3) = Π(M_0, M_2, M_3, M_7)
= Π M(0, 2, 3, 7)
The Π sign denotes the logical product operation.
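The Π notation can also be expanded mechanically. This Python sketch (the bit-packed row index and helper names are mine) builds the canonical product-of-sums for f = Π M(0, 2, 3, 7) directly from the maxterm indices and checks it against the minimal POS derived above:

```python
# Build the canonical POS from maxterm indices (rows where f = 0) and
# confirm it matches the minimal form (x1 + x3)·(x2' + x3').
from itertools import product

ZERO_ROWS = {0, 2, 3, 7}

def pos(x1, x2, x3):
    result = 1
    for r in ZERO_ROWS:
        bits = (r >> 2 & 1, r >> 1 & 1, r & 1)
        # in a maxterm, a variable appears complemented when its bit is 1
        term = 0
        for v, b in zip((x1, x2, x3), bits):
            term |= (1 - v) if b else v
        result &= term      # AND all the sum terms together
    return result

for x1, x2, x3 in product((0, 1), repeat=3):
    idx = x1 << 2 | x2 << 1 | x3
    assert pos(x1, x2, x3) == (0 if idx in ZERO_ROWS else 1)
    assert pos(x1, x2, x3) == (x1 | x3) & ((1 - x2) | (1 - x3))
```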
Ex: Consider the function f (x_1, x_2, x_3) = Ʃ m(2, 3, 4, 6, 7). Instead of using the minterms, we can specify this function as a product of maxterms for which f = 0, namely
f (x_1, x_2, x_3) = Π M(0, 1, 5)
Sol: 1. Then, the canonical POS expression is derived as
f = M_0 • M_1 • M_5
f = (x_1 + x_2 + x_3)(x_1 + x_2 + (x_3 ) ̅ )((x_1 ) ̅ + x_2 + (x_3 ) ̅ )
2. A simplified POS expression can be derived by replicating the middle term (the dual of theorem 7a) and then applying the distributive property 12b and theorem 8a:
f = (x_1 + x_2 + x_3)(x_1 + x_2 + (x_3 ) ̅ )(x_1 + x_2 + (x_3 ) ̅ )((x_1 ) ̅ + x_2 + (x_3 ) ̅ )
f = ((x_1 + x_2) + x_3)((x_1 + x_2) + (x_3 ) ̅ )(x_1 + (x_2 + (x_3 ) ̅ ))((x_1 ) ̅ + (x_2 + (x_3 ) ̅ ))
f = ((x_1 + x_2) + x_3 (x_3 ) ̅ )((x_2 + (x_3 ) ̅ ) + x_1 (x_1 ) ̅ )
f = (x_1 + x_2)(x_2 + (x_3 ) ̅ )
3. Another way of deriving this product-of-sums expression is to use the sum-of-products form of (f ) ̅. Thus,
(f (x_1, x_2, x_3) ) ̅ = Ʃ m(0, 1, 5)
= (x_1 ) ̅ (x_2 ) ̅ (x_3 ) ̅ + (x_1 ) ̅ (x_2 ) ̅ x_3 + (x_1 ) ̅ (x_2 ) ̅ x_3 + x_1 (x_2 ) ̅ x_3
(the term (x_1 ) ̅ (x_2 ) ̅ x_3 is replicated here, which theorem 7b allows)
= (x_1 ) ̅ (x_2 ) ̅ ((x_3 ) ̅ + x_3) + (x_2 ) ̅ x_3 ((x_1 ) ̅ + x_1)
= (x_1 ) ̅ (x_2 ) ̅ + (x_2 ) ̅ x_3
4. Now, first applying DeMorgan’s theorem 15b, and then applying 15a (twice), gives
f = ((x_1 ) ̅ (x_2 ) ̅ + (x_2 ) ̅ x_3 ) ̅
= ((x_1 ) ̅ (x_2 ) ̅ ) ̅ • ((x_2 ) ̅ x_3 ) ̅
= (x_1 + x_2)(x_2 + (x_3 ) ̅ )
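As a check of this example, the minterm list and the simplified POS form can be compared over all valuations (Python sketch; the row-index encoding is mine):

```python
# f = Σ m(2, 3, 4, 6, 7) should equal the simplified POS (x1 + x2)·(x2 + x3').
from itertools import product

ONES = {2, 3, 4, 6, 7}   # minterm indices where f = 1

for x1, x2, x3 in product((0, 1), repeat=3):
    idx = x1 << 2 | x2 << 1 | x3
    f = 1 if idx in ONES else 0
    assert f == (x1 | x2) & (x2 | (1 - x3))
```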
NAND and NOR Logic Networks:
1. The NAND and NOR functions are obtained by complementing the output generated by the AND and OR operations, respectively.
2. These functions are attractive because they are implemented with simpler electronic circuits than the AND and OR functions.
3. Figure gives the graphical symbols for the NAND and NOR gates. A bubble is placed on the output side of the AND and OR gate symbols to represent the complemented output signal.
4. For this reason, practical logic networks are often built entirely from NAND gates or entirely from NOR gates, as shown later in this section.
Fig. NAND and NOR gates
5. The logic-gate interpretation of DeMorgan’s theorem is shown in below Figure.
Fig. DeMorgan’s theorem in terms of logic gates.
6. Identity 15a of DeMorgan’s theorem is interpreted in part (a) of the figure. It specifies that a NAND of variables x1 and x2 is equivalent to first complementing each of the variables and then ORing them. Notice on the far-right side that we have indicated the NOT gates simply as bubbles, which denote inversion of the logic value at that point.
7. The other half of DeMorgan’s theorem, identity 15b, appears in part (b) of the figure. It states that the NOR function is equivalent to first inverting the input variables and then ANDing them.
8. Any logic function can be implemented either in sum-of-products or product-of-sums form, which leads to logic networks that have either an AND-OR or an OR-AND structure, respectively. We will now show that such networks can be implemented using only NAND gates or only NOR gates.
9. Consider the network in below figure as a representative of general AND-OR networks. This network can be transformed into a network of NAND gates as shown in the figure.
Fig. Using NAND gates to implement a sum-of-products.
10. First, each connection between the AND gate and an OR gate is replaced by a connection that includes two inversions of the signal: one inversion at the output of the AND gate and the other at the input of the OR gate. Such double inversion has no effect on the behavior of the network.
11. The OR gate with inversions at its inputs is equivalent to a NAND gate. Thus we can redraw the network using only NAND gates.
12. This example shows that any AND-OR network can be implemented as a NAND-NAND network having the same topology.
13. Below figure gives a similar construction for a product-of-sums network, which can be transformed into a circuit with only NOR gates.
Fig. Using NOR gates to implement a product-of-sums.
14. The procedure is exactly the same but the AND gate with inversions at its inputs is equivalent to a NOR gate. Thus we can redraw the network using only NOR gates
15. The conclusion is that any OR-AND network can be implemented as a NOR-NOR network having the same topology.
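The double-inversion argument above can be checked exhaustively. This Python sketch uses a small AND-OR example of my choosing, f = x1·x2 + x3·x4, and its OR-AND dual:

```python
# An AND-OR network and the NAND-NAND network with the same topology
# compute the same function; likewise OR-AND and NOR-NOR.
from itertools import product

nand = lambda a, b: 1 - (a & b)
nor  = lambda a, b: 1 - (a | b)

for x1, x2, x3, x4 in product((0, 1), repeat=4):
    and_or    = (x1 & x2) | (x3 & x4)
    nand_nand = nand(nand(x1, x2), nand(x3, x4))
    assert and_or == nand_nand

    or_and  = (x1 | x2) & (x3 | x4)
    nor_nor = nor(nor(x1, x2), nor(x3, x4))
    assert or_and == nor_nor
```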
Ex: Let us implement the function f (x_1, x_2, x_3) = Ʃ m(2, 3, 4, 6, 7) using NOR gates only. This function can be represented by the POS expression
f = (x_1 + x_2)(x_2 + (x_3 ) ̅ )
An OR-AND circuit that corresponds to this expression is shown in Figure a. Using the same structure of the circuit, a NOR-gate version is given in Figure b. Note that x_3 is inverted by a NOR gate that has its inputs tied together.
Fig. NOR-gate realization of the function
Ex: Let us implement the function f (x_1, x_2, x_3) = Ʃ m(2, 3, 4, 6, 7) using NAND gates only. The SOP expression of the given function is f = x_2 + x_1 (x_3 ) ̅
1. An AND-OR circuit that corresponds to this expression is shown in Figure a. Using the same structure of the circuit, a NAND -gate version is given in Figure b.
Fig. NAND-gate realization of the function
2. The only difference is that signal x_2 passes only through an OR gate, instead of passing through an AND gate and then an OR gate.
3. If we simply replace the OR gate with a NAND gate, this signal would be inverted, which would result in a wrong output value. Since x_2 must either not be inverted at all or be inverted twice, we can pass it through two NAND gates, as shown in Figure b.
Observe that for this circuit the output f is
f = ((x_2 ) ̅ • (x_1 (x_3 ) ̅ ) ̅ ) ̅
Applying DeMorgan’s theorem, this expression becomes
f = x_(2 ) + x_1 (x_3 ) ̅
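The NAND-only circuit can be simulated gate by gate to confirm this. A Python sketch (gate and signal names are mine, chosen to mirror the derivation above):

```python
# Simulate the NAND-only realization of f = x2 + x1·x3' and compare it
# with the direct AND-OR evaluation for every input valuation.
from itertools import product

nand = lambda a, b: 1 - (a & b)

def f_nand(x1, x2, x3):
    x3_n = nand(x3, x3)      # NAND with tied inputs acts as NOT: x3'
    t    = nand(x1, x3_n)    # (x1·x3')'
    x2_n = nand(x2, x2)      # x2'
    return nand(x2_n, t)     # (x2'·(x1·x3')')' = x2 + x1·x3'

for x1, x2, x3 in product((0, 1), repeat=3):
    assert f_nand(x1, x2, x3) == (x2 | (x1 & (1 - x3)))
```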
1.8. DESIGN EXAMPLES:
1. Logic circuits provide a solution to a problem. They implement functions that are needed to carry out specific tasks. Within the framework of a computer, logic circuits provide complete capability for execution of programs and processing of data.
2. A designer of logic circuits always deals with the same basic issues. First, it is necessary to specify the desired behavior of the circuit. Second, the circuit has to be synthesized and implemented. Finally, the implemented circuit has to be tested to verify that it meets the specifications.
Ex: Three-Way Light Control
1. Assume that a large room has three doors and that a switch near each door controls a light in the room. It has to be possible to turn the light on or off by changing the state of any one of the switches.
2. As a first step, let us turn this word statement into a formal specification using a truth table. Let x1, x2, and x3 be the input variables that denote the state of each switch.
3. Assume that the light is off if all switches are open. Closing any one switch turns the light on; closing a second switch then turns it off. Thus the light is on if exactly one switch is closed, and off if two (or no) switches are closed. If the light is off when two switches are closed, then closing the third switch must turn it on.
4. If f (x_(1 ),x_(2 ),x_(3 )) represents the state of the light, then the required functional behavior can be specified as shown in the truth table in Figure.
Fig. Truth table for the three-way light control.
5. The canonical sum-of-products expression for the specified function is
f = m_1 + m_2 + m_4 + m_7
= (x_1 ) ̅ (x_2 ) ̅ x_3 + (x_1 ) ̅ x_2 (x_3 ) ̅ + x_1 (x_2 ) ̅ (x_3 ) ̅ + x_1 x_2 x_3
6. This expression cannot be simplified into a lower-cost sum-of-products expression. The resulting circuit is shown in Figure a.
7. An alternative realization for this function is in the product-of-sums form. The canonical expression of this type is
f = M_0 • M_3 • M_5 • M_6
= (x_1 + x_2 + x_3)(x_1 + (x_2 ) ̅ + (x_3 ) ̅ )((x_1 ) ̅ + x_2 + (x_3 ) ̅ )((x_1 ) ̅ + (x_2 ) ̅ + x_3)
8. The resulting circuit is depicted in Figure b
Fig. Implementation of the function
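The light-control function is odd parity of the three switch states, so both forms above should agree with a three-input XOR. A Python sketch verifying this (the XOR comparison is my observation, not stated in the text):

```python
# The SOP and POS forms of the three-way light control are both equal
# to x1 XOR x2 XOR x3 (light on iff an odd number of switches is closed).
from itertools import product

NOT = lambda a: 1 - a

for x1, x2, x3 in product((0, 1), repeat=3):
    sop = (NOT(x1) & NOT(x2) & x3) | (NOT(x1) & x2 & NOT(x3)) \
        | (x1 & NOT(x2) & NOT(x3)) | (x1 & x2 & x3)
    pos = (x1 | x2 | x3) & (x1 | NOT(x2) | NOT(x3)) \
        & (NOT(x1) | x2 | NOT(x3)) & (NOT(x1) | NOT(x2) | x3)
    assert sop == pos == x1 ^ x2 ^ x3
```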
1.9. OPTIMIZED IMPLEMENTATION OF LOGIC FUNCTIONS USING K-MAP:
1. In general, it is not obvious when to apply these theorems and properties to find a minimum-cost circuit, and it is often tedious and impractical to do so. This section introduces a more manageable approach, called the Karnaugh map, which provides a systematic way of producing a minimum-cost logic expression.
2. The Karnaugh map is an alternative to the truth-table form for representing a function. The map consists of cells that correspond to the rows of the truth table.
3. Consider the two-variable example in Figure. Part (a) depicts the truth-table form, where each of the four rows is identified by a minterm. Part (b) shows the Karnaugh map, which has four cells.
4. The columns of the map are labeled by the value of x1, and the rows are labeled by x2. This labeling leads to the locations of minterms as shown in the figure
5. Compared to the truth table, the advantage of the Karnaugh map is that it allows easy recognition of minterms that can be combined using property 14a.
Fig . Location of two-variable minterms.
6. Minterms in any two cells that are adjacent, either in the same row or the same column, can be combined. For example, the minterms m2 and m3 can be combined as
m_2 + m_3 = x_1 (x_2 ) ̅ + x_1 x_2
= x_1 ((x_2 ) ̅ + x_2)
= x_1 • 1 = x_1
7. The Karnaugh map is not just useful for combining pairs of minterms. The Karnaugh map can be used directly to derive a minimum-cost circuit for a logic function.
Two-Variable Map:
1. A Karnaugh map for a two-variable function is given in Figure. It corresponds to the function f shown in the truth table
Fig. The function of above table
2. The value of f for each valuation of the variables x_1and x_2 is indicated in the corresponding cell of the map. Because a 1 appears in both cells of the bottom row and these cells are adjacent, there exists a single product term that can cause f to be equal to 1 when the input variables have the values that correspond to either of these cells
3. To indicate this fact, we have circled the cell entries in the map. Rather than applying the combining property formally, we can derive the product term by inspection.
4. Both of the cells are identified by x_2 = 1, but x_1 = 0 for the left cell and x_1 = 1 for the right cell. Thus if x_2 = 1, then f = 1 regardless of whether x_1 is equal to 0 or 1. The product term representing the two cells is simply x_2.
5. Similarly, f = 1 for both cells in the first column. These cells are identified by x_1 = 0. Therefore, they lead to the product term (x_1 ) ̅. Since this takes care of all instances where f = 1, it follows that the minimum-cost realization of the function is
f = x_2 + (x_1 ) ̅
6. Evidently, to find a minimum-cost implementation of a given function, it is necessary to find the smallest number of product terms that produce a value of 1 for all cases where f = 1.
7. Note that a product term that covers two adjacent cells is cheaper to implement than a term that covers only a single cell.
Three-Variable Map:
1. A three-variable Karnaugh map is constructed by placing 2 two-variable maps side by side. Figure a lists all of the three-variable minterms, and part (b) of the figure indicates the locations of these minterms in the Karnaugh map.
Fig. Location of three-variable minterms.
2. In this case each valuation of x_1 and 〖 x〗_2 identifies a column in the map, while the value of x_3 distinguishes the two rows. To ensure that minterms in the adjacent cells in the map can always be combined into a single product term, the adjacent cells must differ in the value of only one variable
3. Thus the columns are identified by the sequence of (x1, x2) values of 00, 01, 11, and 10, rather than the more obvious 00, 01, 10, and 11. This makes the second and third columns different only in variable x1. Also, the first and the fourth columns differ only in variable x1, which means that these columns can be considered as being adjacent.
4. It is useful to visualize the map as a rectangle folded into a cylinder where the left and the right edges in Figure are made to touch. (A sequence of codes, or valuations, where consecutive codes differ in one variable only is known as the Gray code.)
5. Figure shows two functions in Karnaugh-map form. To synthesize the function in part (a), it is necessary to cover the four 1s in the map as efficiently as possible.
Fig. Examples of three-variable Karnaugh maps (a) and (b)
6. Two product terms suffice. The first covers the 1s in the top row and is represented by the term x_1 (x_3 ) ̅. The second term, (x_2 ) ̅ x_3, covers the 1s in the bottom row. Hence the function is implemented as
f = x_1 (x_3 ) ̅ + (x_2 ) ̅ x_3
7. In a three-variable map it is possible to combine cells to produce product terms that correspond to a single cell, two adjacent cells, or a group of four adjacent cells.
8. Realization of a group of four adjacent cells using a single product term is illustrated by the function shown in Figure b.
9. The four cells in the top row correspond to the f (x_1, x_2, x_3) valuations 000, 010, 110, and 100. As we discussed before, this indicates that if x_3 = 0, then f = 1 for all four possible valuations of x_1 and x_2, which means that the only requirement is that x_3 = 0. Therefore, the product term (x_3 ) ̅ represents these four cells.
10. The remaining 1, corresponding to minterm m_5, is best covered by the term x_1 (x_2 ) ̅ , obtained by combining the two cells in the right-most column. The complete realization of f is
f = (x_3 ) ̅ +x_1 (x_2 ) ̅
11. It is also possible to have a group of eight 1s in a three-variable map. This is the trivial case of a function where f = 1 for all valuations of input variables; in other words, f is equal to the constant 1
Four-Variable Map:
1. A four-variable map is constructed by placing 2 three-variable maps together to create four rows in the same fashion as we used 2 two-variable maps to form the four columns in a three-variable map.
2. Figure shows the structure of the four-variable map and the location of minterms.
Fig. A four-variable Karnaugh map.
3. From the figure, x_1 = 1 for the two right-most columns, x_2 = 1 for the two middle columns, x_3 = 1 for the bottom two rows, and x_4 = 1 for the two middle rows.
4. Below figure gives four examples of four-variable functions. The function f_1 has a group of four 1s in adjacent cells in the bottom two rows, for which x_2 = 0 and x_3 = 1; they are represented by the product term (x_2 ) ̅x_3.
5. This leaves the two 1s in the second row to be covered, which can be accomplished with the term x_1 (x_3 ) ̅x_4. Hence the minimum-cost implementation of the function is
f_1 = (x_2 ) ̅x_3 + x_1 (x_3 ) ̅x_4
Fig. Examples of four-variable Karnaugh maps.
6. The function f_2 includes a group of eight 1s that can be implemented by a single term, x3.
7. Note that if the remaining two 1s were implemented as a group of two, the result would be the product term x_1 (x_3 ) ̅x_4. Implementing these 1s as a part of a group of four 1s, as shown in the figure, gives the less expensive product term x_1 x_4.
8. Just as the left and the right edges of the map are adjacent in terms of the assignment of the variables, so are the top and the bottom edges.
9. Indeed, the four corners of the map are adjacent to each other and thus can form a group of four 1s, which may be implemented by the product term (x_2 ) ̅ (x_4 ) ̅. This case is depicted by the function f_3.
10. In addition to this group of 1s, there are four other 1s that must be covered to implement f_3. This can be done as shown in the figure.
11. The function f_(4 ) provides an example where there is some choice. The groups of four 1s in the top-left and bottom-right corners of the map are realized by the terms(x_1 ) ̅ (x_3 ) ̅ and x_1 x_3, respectively.
This leaves the two 1s that correspond to the term x_1 x_2 (x_3 ) ̅. But these two 1s can be realized more economically by treating them as a part of a group of four 1s.
12. They can be included in two different groups of four, as shown in the figure. One choice leads to the product term x_1 x_2, and the other leads to x_2 (x_3 ) ̅. Both of these terms have the same cost; hence it does not matter which one is chosen in the final circuit.
13. Note that the complement of x3 in the term x_2 (x_3 ) ̅ does not imply an increased cost in comparison with x1x2, because this complement must be generated anyway to produce the term (x_1 ) ̅ (x_3 ) ̅, which is included in the implementation.
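The "choice" described for f_4 can be verified by enumeration: completing the cover with x_1 x_2 or with x_2 (x_3 ) ̅ yields the same function, since each term covers the two leftover 1s (x_1 x_2 (x_3 ) ̅ ) plus cells already covered by the other groups. A Python sketch:

```python
# f4's base cover is x1'·x3' + x1·x3; the two leftover 1s are x1·x2·x3'.
# Adding either x1·x2 or x2·x3' completes the cover identically.
from itertools import product

NOT = lambda a: 1 - a

for x1, x2, x3, x4 in product((0, 1), repeat=4):
    base     = (NOT(x1) & NOT(x3)) | (x1 & x3)
    choice_a = base | (x1 & x2)           # option 1
    choice_b = base | (x2 & NOT(x3))      # option 2
    exact    = base | (x1 & x2 & NOT(x3))
    assert choice_a == choice_b == exact
```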
Five-Variable Map:
1. We can use 2 four-variable maps to construct a five-variable map. It is easy to imagine a structure where one map is directly behind the other, and they are distinguished by x_5 = 0 for one map and x_5 = 1 for the other map. Since such a structure is awkward to draw, we can simply place the two maps side by side as shown in Figure
Fig. A five-variable Karnaugh map.
2. For the logic function given in this example, two groups of four 1s appear in the same place in both four-variable maps; hence their realization does not depend on the value of x_5. The same is true for the two groups of two 1s in the second row.
3. The 1 in the top-right corner appears only in the right map, where x_5 = 1; it is a part of the group of two 1s realized by the term x_1 (x_2 ) ̅ (x_3 ) ̅x_5 .
4. Using a five-variable map is obviously more awkward than using maps with fewer variables.
5. Extending the Karnaugh map concept to more variables is not useful from the practical point of view.
Strategy for Minimization:
1. We now describe an approach for deciding how the 1s in a Karnaugh map should be grouped together to obtain the minimum-cost implementation of a given function.
2. The strategy is to find as few groups of 1s as possible, each as large as possible, that together cover all cases where the function has a value of 1.
3. Each group of 1s has to comprise cells that can be represented by a single product term. The larger the group of 1s, the fewer the number of variables in the corresponding product term.
Terminology:
To facilitate the presentation of the results, certain terminology has evolved that avoids the need for using highly descriptive phrases. We define some of this terminology in the following paragraphs because it is useful for describing the minimization process.
Literal:
A given product term consists of some number of variables, each of which may appear either in uncomplemented or complemented form. Each appearance of a variable, either uncomplemented or complemented, is called a literal. For example, the product term x_1 (x_2 ) ̅x_3 has three literals, and the term (x_1 ) ̅x_3 (x_4 ) ̅x_6 has four literals
Implicant
1. A product term that indicates the input valuation(s) for which a given function is equal to 1 is called an implicant of the function. The most basic implicants are the minterms. For an n-variable function, a minterm is an implicant that consists of n literals.
2. Consider the three-variable function in the figure. There are 11 implicants for this function. This includes the five minterms: x_1' x_2' x_3', x_1' x_2' x_3, x_1' x_2 x_3', x_1' x_2 x_3, and x_1 x_2 x_3.
Fig. Three-variable function f (x_1, x_2, x_3) = Ʃ m(0, 1, 2, 3, 7).
3. Then there are the implicants that correspond to all possible pairs of minterms that can be combined, namely, x_1' x_2' (m_0 and m_1), x_1' x_3' (m_0 and m_2), x_1' x_3 (m_1 and m_3), x_1' x_2 (m_2 and m_3), and x_2 x_3 (m_3 and m_7). Finally, there is one implicant that covers a group of four minterms; it consists of the single literal x_1'.
Prime Implicant:
1. An implicant is called a prime implicant if it cannot be combined into another implicant that has fewer literals. Another way of stating this definition is to say that it is impossible to delete any literal in a prime implicant and still have a valid implicant.
2. In the above figure there are two prime implicants: x_1' and x_2 x_3. It is not possible to delete a literal in either of them.
3. Another way of thinking about prime implicants is that they represent “the largest groups of 1s” that can be circled in the Karnaugh map.
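The implicant counts above can be checked mechanically. The following sketch (function and variable names are my own, not from the text) enumerates every product term whose 1-cells lie inside the ON-set of f (x_1, x_2, x_3) = Ʃ m(0, 1, 2, 3, 7), then keeps only the prime ones.

```python
from itertools import product

MINTERMS = {0, 1, 2, 3, 7}   # f(x1, x2, x3) = sum m(0, 1, 2, 3, 7)
N = 3

def implicants(minterms, n):
    """All product terms (cubes) whose 1-cells lie entirely in the ON-set.
    A cube is a tuple of per-variable values: 0, 1, or None (variable eliminated)."""
    found = []
    for cube in product((0, 1, None), repeat=n):
        cells = [m for m in range(2 ** n)
                 if all(v is None or ((m >> (n - 1 - i)) & 1) == v
                        for i, v in enumerate(cube))]
        if cells and set(cells) <= set(minterms):
            found.append(cube)
    return found

def is_prime(cube, imps):
    """Prime: deleting any literal no longer yields a valid implicant."""
    return all(cube[:i] + (None,) + cube[i + 1:] not in imps
               for i, v in enumerate(cube) if v is not None)

imps = implicants(MINTERMS, N)
primes = [c for c in imps if is_prime(c, imps)]
# 11 implicants in total; the primes are (0, None, None) = x1' and (None, 1, 1) = x2 x3
```

This matches the count in the text: five minterms, five pairs, one quad, and exactly two prime implicants.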
Cover
A collection of implicants that account for all valuations for which a given function is equal to 1 is called a cover of that function.
Cost
Cost of a logic circuit is the number of gates plus the total number of inputs to all gates in the circuit.
Essential Prime Implicant:
If a prime implicant includes a minterm for which f = 1 that is not included in any other prime implicant, then it must be included in the cover and is called an essential prime implicant. In the example in the above figure, both prime implicants are essential.
Incompletely Specified Functions:
1. In digital systems it often happens that certain input conditions can never occur. For example, suppose that x_1 and x_(2 ) control two interlocked switches such that both switches cannot be closed at the same time.
2. Thus the only three possible states of the switches are that both switches are open or that one switch is open and the other switch is closed.
3. Namely, the input valuations (x_1, x_2) = 00, 01, and 10 are possible, but 11 is guaranteed not to occur. We then say that (x_1, x_2) = 11 is a don't-care condition, meaning that a circuit with x_1 and x_2 as inputs can be designed by ignoring this condition.
4. Don’t-care conditions, or don’t-cares for short, can be used to advantage in the design of logic circuits. Since these input valuations will never occur, the designer may assume that the function value for these valuations is either 1 or 0, whichever is more useful in trying to find a minimum-cost implementation.
5. The figure below illustrates this idea. The required function has a value of 1 for minterms m_2, m_4, m_5, m_6, and m_10. Assuming the above-mentioned interlocked switches, the x_1 and x_2 inputs will never be equal to 1 at the same time; hence the minterms m_12, m_13, m_14, and m_15 can all be used as don't-cares. The don't-cares are denoted by the letter d in the map. Using the shorthand notation, the function f is specified as
f (x_1, . . . , x_4) = Ʃ m(2, 4, 5, 6, 10) + D(12, 13, 14, 15)
where D is the set of don't-cares.
Fig. Two implementations of the function f (x_1, . . . , x_4) = Ʃ m(2, 4, 5, 6, 10) + D(12, 13, 14, 15)
6. Part (a) of the figure indicates the best sum-of-products implementation. To form the largest possible groups of 1s, thus generating the lowest-cost prime implicants, it is necessary to assume that the don't-cares for m_12, m_13, and m_14 have the value of 1 while the don't-care for m_15 has the value of 0.
7. Then there are only two prime implicants, which provide a complete cover of f . The resulting implementation is
f = x_2 x_3' + x_3 x_4'
8. Part (b) shows how the best product-of-sums implementation can be obtained. The same values are assumed for the don’t cares. The result is
f = (x_2 + x_3)(x_3' + x_4')
9. The freedom in choosing the value of don’t-cares leads to greatly simplified realizations. If we were to naively exclude the don’t-cares from the synthesis of the function, by assuming that they always have a value of 0, the resulting SOP expression would be
f = x_1' x_2 x_3' + x_1' x_3 x_4' + x_2' x_3 x_4'
and the POS expression would be
f = (x_2 + x_3)(x_3' + x_4')(x_1' + x_2')
10. Both of these expressions have higher costs than the expressions obtained with a more appropriate assignment of values to don’t-cares.
11. Although don’t-care values can be assigned arbitrarily, an arbitrary assignment may not lead to a minimum-cost implementation of a given function.
12. If there are k don't-cares, then there are 2^k possible ways of assigning 0 or 1 values to them. In the Karnaugh map we can usually see how best to do this assignment to find the simplest implementation.
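The claim that both minimum-cost forms agree with the specification can be checked by brute force over all 16 rows of the truth table. The sketch below (function names are my own) verifies the SOP and POS expressions derived above against the ON-set and don't-care set.

```python
# Specification from the map: required 1s and don't-cares of f(x1, x2, x3, x4)
ON = {2, 4, 5, 6, 10}
DC = {12, 13, 14, 15}

def bits(m):
    """Row m of the truth table as (x1, x2, x3, x4), x1 being the MSB."""
    return tuple((m >> s) & 1 for s in (3, 2, 1, 0))

def f_sop(x1, x2, x3, x4):
    # minimum-cost SOP: f = x2 x3' + x3 x4'
    return bool((x2 and not x3) or (x3 and not x4))

def f_pos(x1, x2, x3, x4):
    # minimum-cost POS: f = (x2 + x3)(x3' + x4')
    return bool((x2 or x3) and (not x3 or not x4))

# Both forms agree with the specification on every row that is not a don't-care,
# and with each other everywhere (both assign 1 to d12, d13, d14 and 0 to d15).
for m in range(16):
    assert f_sop(*bits(m)) == f_pos(*bits(m))
    if m not in DC:
        assert f_sop(*bits(m)) == (m in ON)
```

Running the loop silently confirms that the don't-care assignment in parts (a) and (b) of the figure is consistent.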
1.10. MINIMIZATION USING QUINE-McCLUSKEY (TABULAR) METHOD:
The K-map method is suitable for simplification of Boolean functions of up to 5 or 6 variables. As the number of variables increases beyond this, visualizing adjacent squares becomes difficult because the geometry is more involved. The Quine-McCluskey, or tabular, method is employed in such cases. It is a systematic step-by-step procedure for minimizing a Boolean expression in standard form.
Procedure for Finding the Minimal Expression:
Arrange all minterms in groups such that all terms in the same group have the same number of 1s in their binary representation. Start with the least number of 1s and continue with groups of increasing numbers of 1s. The number of 1s in a term is called the index of that term; i.e., all minterms of the same index are placed in the same group. The lowest value of the index is zero. Separate each group by a thick line. This constitutes the I stage.
Compare every term of the lowest-index group (say, i) with each term in the successive group of index (i + 1). If two minterms differ in only one variable, that variable is removed and a dash (–) is placed in its position, forming a new term with one less literal. When this happens, a check mark (✔) is placed next to both minterms. After all pairs of terms with indices i and (i + 1) have been considered, a thick line is drawn under the last term. When the above process has been repeated for all the groups of the I stage, one stage of elimination has been completed. This constitutes the II stage.
The III stage of elimination is carried out on the newly formed groups of the II stage. In this stage, two terms can be compared only when they have dashes in the same positions. The process continues to higher stages until no further comparisons are possible (i.e., no further elimination of literals).
All terms which remain unchecked (No ✔ sign) during the process are considered to be prime implicants (PIs). Thus, a set of all PIs of the function is obtained.
From the set of all prime implicants, a set of essential prime implicants (EPIs) must be determined by preparing a prime implicant chart as follows.
The PIs should be represented in rows and each minterm of the function in a column.
Crosses should be placed in each row against the minterms covered by that PI.
The completed PI chart should be inspected for columns containing only a single cross. A PI that covers a minterm with a single cross in its column is an EPI.
The minterms which are not covered by the EPIs are taken into consideration and a minimum cover is obtained from the remaining PIs.
If don't-care conditions are given, they are also used to find the prime implicants, but it is not compulsory to cover them in the final simplified expression.
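The steps above can be sketched in code. The following is a minimal Quine-McCluskey implementation (names and string encoding are my own; a dash marks an eliminated variable, mirroring the tables in the procedure). The leftover-minterm step uses a simple greedy choice, which is one common way to finish the chart after the EPIs are taken.

```python
from itertools import combinations

def quine_mccluskey(minterms, dont_cares, n):
    """Return (prime implicants, a cover) for an n-variable function.
    A term is a string over {'0', '1', '-'}; '-' marks an eliminated variable."""
    def to_bin(m):
        return format(m, f'0{n}b')

    # Stages I, II, III, ...: repeatedly combine terms differing in one position.
    terms = {to_bin(m) for m in set(minterms) | set(dont_cares)}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            diff = [i for i in range(n) if a[i] != b[i]]
            # dashes must line up, and exactly one real bit may differ
            if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
                merged.add(a[:diff[0]] + '-' + a[diff[0] + 1:])
                used.update((a, b))
        primes |= terms - used          # unchecked terms are prime implicants
        terms = merged

    def covers(term, m):
        return all(t in ('-', b) for t, b in zip(term, to_bin(m)))

    # PI chart: columns are the required minterms only (don't-cares are omitted).
    cover = []
    for m in minterms:                  # a column with a single cross gives an EPI
        crossed = [p for p in primes if covers(p, m)]
        if len(crossed) == 1 and crossed[0] not in cover:
            cover.append(crossed[0])
    remaining = {m for m in minterms if not any(covers(p, m) for p in cover)}
    while remaining:                    # greedily cover any leftover minterms
        best = max(primes, key=lambda p: sum(covers(p, m) for m in remaining))
        cover.append(best)
        remaining -= {m for m in remaining if covers(best, m)}
    return primes, cover

# The three-variable function used earlier, f = sum m(0, 1, 2, 3, 7):
primes, cover = quine_mccluskey([0, 1, 2, 3, 7], [], 3)
# primes == {'0--', '-11'}, i.e. x1' and x2 x3, and both are essential
```

For the earlier three-variable function this reproduces the two prime implicants x_1' and x_2 x_3 found on the Karnaugh map.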
Example 1. Simplify the given function using tabular method.
F (A, B, C, D) = ∑ m(0, 2, 3, 6, 7, 8, 10, 12, 13)
Solution.
1. The minterms of the function are represented in binary form. The binary representations are grouped into a number of sections according to the number of 1s (the index), as shown in the table.
2. Compare each binary term with every term in the adjacent next-higher category. If they differ in only one position, put a check mark and copy the term into the next column with a dash (–) in the place where the variable is unmatched, as shown in the next table.
3. Apply the same process to the resultant column of the table and continue until no further elimination of literals is possible. This is shown in the table.
4. All terms which remain unchecked are the PIs. Note, however, that the minterm combinations (0, 2) and (8, 10) form the same combination (0, 2, 8, 10) as the combinations (0, 8) and (2, 10). The order in which these combinations are placed has no effect. Moreover, since x + x = x, we can eliminate one of the duplicate combinations. The same occurs with the combinations (2, 3) and (6, 7).
5. Now we prepare a PI chart to determine EPIs, as shown in the table.
All the PIs are represented in rows and each minterm of the function in a column.
Crosses are placed in each row to show the minterms covered by that PI.
For each column that contains a single cross, the PI corresponding to the row in which the cross appears is an essential prime implicant. A tick mark is put against each column which has only one cross, and a star (*) is placed against each EPI.
6. All the minterms have been covered by EPIs.
Finally, the sum of all the EPIs gives the function in its minimal SOP form.
Therefore, F = ABC' + B'D' + A'C.
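The result of Example 1 can be sanity-checked by brute force: evaluate the minimal expression on all 16 input rows and compare against the minterm list. The function name below is my own.

```python
# Brute-force check of Example 1: F(A, B, C, D) = sum m(0, 2, 3, 6, 7, 8, 10, 12, 13)
ON = {0, 2, 3, 6, 7, 8, 10, 12, 13}

def F(a, b, c, d):
    # minimal SOP from the PI chart: F = ABC' + B'D' + A'C
    return bool((a and b and not c) or (not b and not d) or (not a and c))

for m in range(16):
    a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert F(a, b, c, d) == (m in ON)
```

The loop passes: the three EPIs together produce a 1 exactly on the nine listed minterms and nowhere else.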
Example 2. Simplify the given function using tabular method.
F(A, B, C, D) = ∑ m(0, 2, 3, 6, 7) + d(5, 8, 10, 11, 15)
Solution:
1. The minterms of the function are represented in binary form. The binary representations are grouped into a number of sections according to the number of 1s (the index), as shown in the table. The don't-care minterms are also included.
2. Compare each binary term with every term in the adjacent next-higher category. If they differ in only one position, put a check mark and copy the term into the next column with a dash (–) in the place where the variable is unmatched, as shown in the table.
3. Apply the same process to the resultant column of the table and continue until no further elimination of literals is possible. This is shown in the table.
4. All the terms which remain unchecked are PIs. Moreover, one of each pair of identical combinations is eliminated.
5. Prepare a PI chart to determine EPIs, as shown in the table.
Note, however, that don't-care minterms are not listed as column headings in the chart, as they do not have to be covered by the minimal (simplified) expression.
6. All the minterms have been covered by EPIs.
Therefore F (A, B, C, D) = B'D' + A'C
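Example 2 can be checked the same way, this time skipping the don't-care rows, where either output value is acceptable. The function name is my own.

```python
# Brute-force check of Example 2: required 1s and don't-cares of F(A, B, C, D)
ON = {0, 2, 3, 6, 7}
DC = {5, 8, 10, 11, 15}

def F(a, b, c, d):
    # minimal SOP: F = B'D' + A'C
    return bool((not b and not d) or (not a and c))

for m in range(16):
    if m in DC:
        continue  # don't-cares: either output is acceptable
    a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert F(a, b, c, d) == (m in ON)
```

The check passes on every specified row, confirming that dropping the don't-cares from the cover is harmless.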