Fig 3: 4-Transistor (4T) SRAM cell
The challenge in the 4T cell is making the resistor load high enough (in the range of giga-ohms) to minimize current. However, the resistance must not be so high that it compromises functionality. Despite its size advantage, the 4T cell has several limitations:
1. Current always flows through one of the two resistors (i.e., the SRAM has a high standby current).
2. The cell is sensitive to noise and soft errors because the resistance is so high.
3. The cell is not as fast as the 6T cell.
6T (Six Transistor) Cell
A different cell design that eliminates the above limitations uses a CMOS flip-flop. In this case, the load is replaced by a PMOS transistor. This SRAM cell is composed of six transistors: one NMOS transistor and one PMOS transistor for each inverter, plus two NMOS transistors connected to the row line (as shown in fig 2). This configuration is called a 6T cell. It offers better electrical performance (speed, noise immunity, standby current) than the 4T structure.
TFT (Thin Film Transistor) Cell
This new structure reduces the current flowing through the resistor load of the old 4T cell. The change in the electrical characteristics of the load is achieved by controlling the channel of a transistor: the resistor is configured as a PMOS transistor called a thin film transistor (TFT). It is formed by depositing several layers of polysilicon above the silicon surface, and the source/channel/drain is formed in the polysilicon load. The gate of this TFT is polysilicon and is tied to the gate of the opposite inverter, as in the 6T cell architecture. The oxide between this control gate and the TFT polysilicon channel must be thin enough to ensure the effectiveness of the transistor. The performance of the TFT PMOS transistor is not as good as that of a standard silicon PMOS transistor used in a 6T cell.
Fig 4: Thin Film Transistor (TFT) SRAM cell
This type of cell requires more complex process technology than the 4T cell, and the TFT's electrical characteristics are poor compared to those of a bulk PMOS transistor.
In addition to such SRAM types, other kinds of SRAM chips use 8T, 10T, or more transistors per bit. This is sometimes done to implement more than one (read and/or write) port, which may be useful in certain types of video memory and in register files implemented with multi-ported SRAM circuitry. Memory cells that use fewer than six transistors, such as 3T or 1T cells, are DRAM, not SRAM.
Classification of SRAM by transistor type:
1. Bipolar junction transistor (used in TTL and ECL): very fast but consumes a lot of power
2. MOSFET (used in CMOS): low power and very common today
Classification of SRAM by function:
1. Asynchronous: independent of clock frequency; data in and data out are controlled by address transition.
2. Synchronous: As computer system clocks increased, the demand for very fast SRAMs led to variations on the standard asynchronous fast SRAM. The result was the synchronous SRAM (SSRAM). Synchronous SRAMs have their read and write cycles synchronized with the microprocessor clock and can therefore be used in very high-speed applications. An important application for synchronous SRAMs is the cache SRAM used in PCs. SSRAMs typically have a 32-bit output configuration, while standard ASRAMs typically have an 8-bit output configuration. All timings are initiated by the clock edge(s); address, data-in and other control signals are associated with the clock signals.
Classification of SRAM by feature:
1. ZBT (zero bus turnaround): the turnaround is the number of clock cycles it takes to switch access to the SRAM from write to read and vice versa. For ZBT SRAMs this turnaround, i.e. the latency between read and write cycles, is zero. In short, ZBT SRAM is designed to eliminate dead cycles when turning the bus around between reads and writes.
2. Sync-Burst (synchronous-burst SRAM): features synchronous burst write access to speed up write operations to the SRAM.
3. DDR SRAM: Synchronous, single read/write port, double data rate IO. It increases the performance of the device by transferring data on both edges of the clock.
4. Quad Data Rate SRAM: Synchronous, separate read & write ports, double data rate IO
5. Pipelined SRAM: Pipelined SRAMs (also called register-to-register SRAMs) add a register between the memory array and the output. They are less expensive than standard ASRAMs of equivalent electrical performance, because the pipelined design does not require the aggressive manufacturing process of a standard ASRAM.
6. Late-Write SRAM: Late-write SRAM requires the input data only at the end of the cycle.
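The ZBT feature above can be illustrated with a toy cycle-count model. This is only a sketch: the one-cycle turnaround penalty assumed for the standard synchronous SRAM is illustrative and not taken from any specific datasheet.

```python
# Toy model contrasting a standard synchronous SRAM, which inserts dead
# "turnaround" cycles whenever the bus changes direction, with a ZBT SRAM,
# whose turnaround penalty is zero.

def bus_cycles(operations, turnaround_penalty):
    """Count clock cycles for a sequence of 'R'/'W' operations,
    charging `turnaround_penalty` dead cycles on each direction change."""
    cycles = 0
    previous = None
    for op in operations:
        if previous is not None and op != previous:
            cycles += turnaround_penalty  # dead cycles while the bus turns around
        cycles += 1  # one cycle per access
        previous = op
    return cycles

ops = list("RRWWRWRW")          # an access pattern that changes direction often
standard = bus_cycles(ops, 1)   # assumed: 1 dead cycle per turnaround
zbt = bus_cycles(ops, 0)        # ZBT: zero bus turnaround

print(standard)  # 13
print(zbt)       # 8
```

With this access pattern the ZBT part finishes in 8 cycles instead of 13, which is why ZBT helps most on workloads that interleave reads and writes.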
Each bit in an SRAM is stored on four transistors that form two cross-coupled inverters (as shown in Fig 2). This storage cell has two stable states, which are used to denote 0 and 1. Two additional access transistors serve to control the access to a storage cell during read and write operations. A typical SRAM uses six MOSFETs to store each memory bit, and the following explanation assumes such a 6T cell.
Access to the cell is enabled by the word line which controls the two access transistors M5 and M6 which, in turn, control whether the cell should be connected to the bit lines: -BL and BL. They are used to transfer data for both read and write operations. Although it is not strictly necessary to have two bit lines, both the signal and its inverse are typically provided to improve noise margins.
During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM cell. This improves SRAM bandwidth compared to DRAMs. In a DRAM, the bit line is connected to storage capacitors and charge sharing causes the bit line to swing upwards or downwards. The symmetric structure of SRAMs also allows for differential signaling, which makes small voltage swings more easily detectable. Another difference from DRAM that contributes to making SRAM faster is that commercial SRAM chips accept all address bits at once. By comparison, commodity DRAMs have the address multiplexed in two halves, i.e. the higher bits followed by the lower bits.
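The addressing difference can be sketched as follows. This is an illustrative model only; the 20-bit address width and the even row/column split are assumptions for the example, not properties of any particular device.

```python
# An SRAM receives the full address in one step; a commodity DRAM
# multiplexes it in two halves (row bits first, then column bits)
# over the same pins.

ADDR_BITS = 20            # assumed address width for this example
ROW_BITS = ADDR_BITS // 2 # assumed even split between row and column

def sram_address(addr):
    """SRAM: the whole address is presented at once."""
    return addr

def dram_address_phases(addr):
    """DRAM: return the (row, column) halves presented on successive strobes."""
    row = addr >> ROW_BITS               # higher bits, latched first (RAS)
    col = addr & ((1 << ROW_BITS) - 1)   # lower bits, latched second (CAS)
    return row, col

addr = 0xABCDE
row, col = dram_address_phases(addr)
# Reassembling the two halves recovers the same location the SRAM saw at once:
assert (row << ROW_BITS) | col == sram_address(addr)
```

The DRAM needs two strobe phases to deliver the same information the SRAM receives in one, which is part of why SRAM access is faster.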
An SRAM cell has three different states it can be in:
1. Standby where the circuit is idle
2. Reading when the data has been requested
3. Writing when updating the contents
Standby: If the word line is not asserted, the access transistors M5 and M6 disconnect the cell from the bit lines. The two cross-coupled inverters formed by M1 – M4 will continue to reinforce each other as long as they are disconnected from the outside world.
Read operation: Assume that the content of the memory is a 1, stored at Q. The read cycle is started by pre-charging both the bit lines to a logical 1, then asserting the word line WL, enabling both the access transistors. The second step occurs when the values stored in Q and -Q are transferred to the bit lines by leaving BL at its pre-charged value and discharging -BL through M1 and M5 to a logical 0. On the BL side, the transistors M4 and M6 pull the bit line toward VDD, a logical 1. If the content of the memory was a 0, the opposite would happen and -BL would be pulled toward 1 and BL toward 0.
Write operation: The value to be written is applied to the bit-lines to start the writing operation. To write a 0, we would apply a 0 to the bit lines, i.e. setting -BL to 1 and BL to 0. This is similar to applying a reset pulse to a SR-latch, which causes the flip-flop to change state. A 1 is written by inverting the values of the bit lines. WL is then asserted and the value that is to be stored is latched in. Note that the reason this works is that the bit line input-drivers are designed to be much stronger than the relatively weak transistors in the cell itself, so that they can easily override the previous state of the cross-coupled inverters. Careful sizing of the transistors in an SRAM cell is needed to ensure proper operation.
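The three states described above can be summarized in a behavioural toy model of the cell: two cross-coupled inverter nodes (Q and -Q), a word line, and a pair of bit lines. This models only the logic-level behaviour; transistor sizing, precharge timing and analog effects are deliberately ignored.

```python
# Behavioural sketch of the 6T cell's standby, read and write operations.

class SramCell:
    def __init__(self, value=0):
        self.q = value  # node Q; -Q is implicitly the complement

    def standby(self):
        """Word line de-asserted: the cross-coupled inverters keep
        reinforcing the stored state."""
        return self.q

    def read(self):
        """Word line asserted with both bit lines precharged high: the cell
        discharges one of them, producing the differential pair (BL, -BL)."""
        bl = self.q          # BL stays at its precharged value when Q = 1
        bl_bar = 1 - self.q  # -BL is discharged to 0 when Q = 1
        return bl, bl_bar

    def write(self, value):
        """Word line asserted while strong bit-line drivers force the new
        value, overriding the weak cross-coupled inverters."""
        self.q = value

cell = SramCell(1)
print(cell.read())   # (1, 0): BL high, -BL pulled low, as in the read example
cell.write(0)
print(cell.read())   # (0, 1): the opposite case
```

Note how `write` simply overwrites `q`: this mirrors the text's point that the bit-line drivers are sized to be much stronger than the cell's own transistors.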
To work properly and to ensure that the data in the cell is not altered, the SRAM must be supplied by a Vdd (power supply) that does not fluctuate by more than five to ten percent of the nominal VCC. If the cell is not being accessed, a lower voltage is sufficient for it to retain its data; in that case the SRAM can be placed in a retention mode, where the power supply is lowered and the part is no longer accessible.
The power consumption of SRAM varies depending on how frequently it is accessed; it can be as power-hungry as dynamic RAM when used at high frequencies. On the other hand, SRAM used at a somewhat slower pace, such as in applications with moderately clocked microprocessors, draws very little power and can have nearly negligible power consumption when sitting idle.
The pin connections common to all types of memory devices (including SRAM) are the address inputs, data I/O, some type of selection input, and at least one control input used to select a read or write operation.
Fig 5: Basic memory component connections
The address inputs are used to select a memory location within the memory device. A memory device that has 10 address lines has its address pins labeled from A0 (least significant) to A9. The number of address pins found on a memory device is determined by the number of memory locations found within it. The data I/O connections are the points at which data are entered for storage or extracted for reading. Today, memory devices are equipped with bidirectional common data I/O lines.
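The relationship stated above, between the number of address pins and the number of addressable locations, is simply locations = 2^pins. A small helper in both directions:

```python
import math

def locations(address_pins):
    """Number of memory locations reachable with the given address pins."""
    return 2 ** address_pins

def pins_needed(num_locations):
    """Minimum number of address pins needed to address `num_locations` words."""
    return math.ceil(math.log2(num_locations))

print(locations(10))       # 1024 locations for pins A0..A9, as in the example
print(pins_needed(65536))  # 16
```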
The SRAM has an input that selects or enables the memory device, called chip select (CS). If this pin is active (a logic 0 applied at this pin) the memory device performs a read or a write operation.
The other two control inputs associated with SRAM are Write Enable (WE) and Output Enable (OE), also called read enable. Sometimes WE is labeled W and OE is labeled G. The Write Enable pin must be made active (logic 0) to perform a memory write operation, and OE must be active to perform a read operation from the memory; they must never both be active at the same time.
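The active-low control behaviour described above can be sketched as a small truth-table decoder. The string labels are hypothetical names for this illustration; the convention assumed is 0 = asserted, as stated in the text.

```python
# Decode the CS, WE and OE pins of an SRAM (all active-low, 0 = asserted).

def decode(cs, we, oe):
    """Return the operation selected by the active-low CS, WE and OE pins."""
    if cs == 1:
        return "deselected"     # chip not selected: no read or write occurs
    if we == 0 and oe == 0:
        return "invalid"        # WE and OE must never both be active
    if we == 0:
        return "write"          # WE active selects a memory write
    if oe == 0:
        return "read"           # OE active selects a memory read
    return "selected, idle"     # chip enabled but no operation requested

print(decode(1, 0, 1))  # deselected (CS high overrides everything)
print(decode(0, 0, 1))  # write
print(decode(0, 1, 0))  # read
```

The first check mirrors the text's rule that CS must be active before any read or write can take place, and the `invalid` branch enforces the rule that WE and OE are never asserted together.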
Fig 6 shows a typical functional block diagram and a typical pin configuration of an asynchronous SRAM (from Cypress).