Saturday, July 19, 2008

MEMORY BUS

The memory bus is used to transfer information between the CPU and main memory, the RAM in your system. This bus is usually connected to the motherboard chipset's North Bridge or Memory Controller Hub chip. Depending on the type of memory the chipset (and therefore the motherboard) is designed to handle, the North Bridge runs the memory bus at various speeds. The best solution is for the memory bus to run at the same speed as the processor bus. Some computers also have a back side bus, which connects the CPU to a memory cache; this bus and the cache connected to it are faster than accessing system RAM via the front side bus. The maximum theoretical bandwidth of the front side bus is the product of its width, its clock frequency, and the number of data transfers it performs per clock cycle. For example, a 32-bit (4-byte) wide FSB with a frequency of 100 MHz that performs 4 transfers per cycle has a maximum bandwidth of 1600 MB/second. The number of transfers per cycle depends on the signaling technology used, with (for example) GTL+ offering 1 transfer/cycle, EV6 2 transfers/cycle, and AGTL+ 4 transfers/cycle.
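To make the arithmetic concrete, here is a minimal sketch (in Python, with an illustrative helper name) of the bandwidth formula described above:

def fsb_bandwidth_mb_s(width_bits, clock_mhz, transfers_per_cycle):
    # Peak FSB bandwidth in MB/s: width in bytes x clock in MHz x transfers per cycle.
    return (width_bits // 8) * clock_mhz * transfers_per_cycle

# Example from the text: a 32-bit bus at 100 MHz with 4 transfers per cycle
print(fsb_bandwidth_mb_s(32, 100, 4))   # 1600 MB/s
# A quad-pumped 64-bit 100 MHz bus (a 400 MHz Pentium 4 style FSB)
print(fsb_bandwidth_mb_s(64, 100, 4))   # 3200 MB/s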

Systems that use PC133 SDRAM have a memory bandwidth of 1066 MBps, which is the same as a 133 MHz CPU bus. In another example, Athlon systems running a 266 MHz processor bus use PC2100 DDR SDRAM, which has a bandwidth of 2133 MBps, exactly the same as the processor bus in those systems. Systems running a Pentium 4 with its 400 MHz processor bus use dual-channel RDRAM memory, which runs at 1600 MBps per channel, for a combined bandwidth (both memory channels run simultaneously) of 3200 MBps, exactly the same as the Pentium 4 CPU bus. Pentium 4 systems with the 533 MHz bus run dual-channel DDR PC2100 or PC2700 modules, which match or exceed the throughput of the 4266 MBps processor bus.
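The pairings above can be checked with the same arithmetic. The following sketch simply compares the figures quoted in this paragraph; the list and labels are only for illustration:

# CPU-bus bandwidth versus memory bandwidth, in MBps, figures taken from the text.
pairings = [
    ("PC133 SDRAM vs 133 MHz CPU bus",              1066, 1066),
    ("PC2100 DDR vs 266 MHz Athlon bus",            2133, 2133),
    ("Dual-channel RDRAM vs 400 MHz Pentium 4 bus", 3200, 2 * 1600),
    ("Dual-channel PC2700 vs 533 MHz Pentium 4 bus", 4266, 2 * 2700),
]

for name, cpu_bus, memory in pairings:
    if memory == cpu_bus:
        verdict = "matches"
    elif memory > cpu_bus:
        verdict = "exceeds"
    else:
        verdict = "falls short of"
    print(f"{name}: memory at {memory} MBps {verdict} the {cpu_bus} MBps CPU bus")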

Running memory at the same speed as the processor bus negates the need for cache memory on the motherboard. That is why, when the L2 cache moved into the processor, nobody added an L3 cache to the motherboard. Some very high-end processors, such as the Itanium, Itanium 2, and the Intel Pentium 4 Extreme Edition, integrate 2MB to 4MB of full-core-speed L3 cache into the CPU. However, the most recent high-performance chips, such as the new Pentium Extreme Edition, use only L1 and L2 cache. Thus, it appears that L2 cache will continue to be the most common type of secondary cache for the foreseeable future.


History and Current Usage

The front side bus has been a part of computer architecture since applications first started using more memory than a CPU could reasonably hold.

Most modern front side buses serve as a backbone between the CPU and a chipset. This chipset (usually a combination of a northbridge and a southbridge) is the connection point for all other buses in the system. Buses like the PCI, AGP, and memory buses all connect to the chipset to allow for data to flow between the connected devices. These secondary system buses usually run at speeds derived from the front side bus speed.

In response to AMD's Torrenza initiative, Intel has opened up its FSB CPU socket to third-party devices. Prior to this announcement, made in spring 2007 at the Intel Developer Forum in Beijing, Intel had very closely guarded access to the FSB, allowing only Intel processors in the CPU socket. This is now changing, the first examples being FPGA co-processors resulting from the Intel-Xilinx-Nallatech and Intel-Altera-XtremeData collaborations.

Related Component Speeds

CPU

The frequency at which a processor (CPU) operates is determined by applying a clock multiplier to the front side bus (FSB) speed. For example, a processor running at 550 MHz might be using a 100 MHz FSB. This means there is an internal clock multiplier setting (also called the bus/core ratio) of 5.5; the CPU is set to run at 5.5 times the frequency of the front side bus: 100 MHz × 5.5 = 550 MHz. By varying either the FSB or the multiplier, different CPU speeds can be achieved.
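Expressed as code, the relationship is simply core clock = FSB clock × multiplier; the sketch below just restates the example above:

def cpu_clock_mhz(fsb_mhz, multiplier):
    # Core clock derived from the front side bus speed and the bus/core ratio.
    return fsb_mhz * multiplier

# 100 MHz FSB with a 5.5x multiplier gives a 550 MHz core clock
print(cpu_clock_mhz(100, 5.5))  # 550.0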

Memory

Setting an FSB speed directly determines the speed grade of memory that a system must use. The memory bus connects the northbridge and RAM, just as the front side bus connects the CPU and northbridge. Often, these two buses must operate at the same frequency; increasing the front side bus to 170 MHz means, in most cases, also running the memory at 170 MHz.

In newer systems, it is possible to see memory ratios of "4:5" and the like. The memory will run 5/4 times as fast as the FSB in this situation, meaning a 133 MHz bus can run with the memory at 166 MHz. This is often referred to as an 'asynchronous' system. It is important to realize that, due to differences in CPU and system architecture, overall system performance can vary in unexpected ways with different FSB-to-memory ratios.
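As a small illustration of that ratio arithmetic (the function name is just for this example):

def memory_clock_mhz(fsb_mhz, fsb_part, mem_part):
    # Memory clock from the FSB clock and an FSB:memory ratio such as 4:5.
    return fsb_mhz * mem_part / fsb_part

# A 4:5 ratio runs memory at 5/4 of the FSB speed: a 133 MHz bus pairs with ~166 MHz memory
print(memory_clock_mhz(133, 4, 5))  # 166.25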

In complex image, audio, video, gaming, and scientific applications where the data set is large, FSB speed becomes a major performance issue. A slow FSB will cause the CPU to spend significant amounts of time waiting for data to arrive from system memory.

Peripheral Buses

Similar to the memory bus, the PCI and AGP buses can also be run asynchronously from the front side bus. In older systems, these buses operated at a set fraction of the front side bus frequency. This fraction was set by the BIOS. In newer systems the PCI, AGP, and PCI Express peripheral buses often receive their own clock signals, which eliminates their dependence on the front side bus for timing.
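For the older divider scheme, a rough sketch follows; the specific divisors are typical values assumed for illustration, not taken from the text:

def derived_bus_mhz(fsb_mhz, divisor):
    # Older chipsets clocked PCI and AGP at a fixed fraction of the FSB.
    return fsb_mhz / divisor

fsb = 133
print(derived_bus_mhz(fsb, 4))  # ~33 MHz for PCI (assumed /4 divider)
print(derived_bus_mhz(fsb, 2))  # ~66 MHz for AGP (assumed /2 divider)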

Overclocking

Overclocking is the practice of making computer components operate beyond their stock performance levels.

Many motherboards allow the user to manually set the clock multiplier and FSB settings by changing jumpers or BIOS settings. Many CPU manufacturers now "lock" a preset multiplier setting into the chip. It is possible to unlock some locked CPUs; for instance, some Athlons can be unlocked by connecting electrical contacts across points on the CPU's surface. For all processors, the FSB speed can be increased to boost processing speed.
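Because the core clock is derived from the FSB, raising the FSB on a multiplier-locked chip still raises the core clock. The numbers below are hypothetical and only illustrate the arithmetic:

def core_clock_mhz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

locked_multiplier = 11            # hypothetical locked bus/core ratio
stock_fsb, raised_fsb = 133, 150  # hypothetical stock and overclocked FSB speeds

print(core_clock_mhz(stock_fsb, locked_multiplier))   # 1463 MHz at stock
print(core_clock_mhz(raised_fsb, locked_multiplier))  # 1650 MHz after raising the FSB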

This practice pushes components beyond their specifications and may cause erratic behavior, overheating, or premature failure. Even if the computer appears to run normally, problems may appear under heavy load; for example, during Windows Setup you may receive a file copy error or experience other problems. Most PCs purchased from retailers or manufacturers such as Hewlett-Packard or Dell do not allow the user to change the multiplier or front side bus settings, due to the probability of erratic behavior or failure. Motherboards purchased separately to build a custom machine are more likely to allow the user to edit the multiplier and FSB settings in the PC's BIOS.

Pros and Cons

The front side bus as it is traditionally known may be disappearing. Originally, this bus was a central connecting point for all system devices and the CPU. However, in recent years this has been breaking down with the increasing use of individual point-to-point buses. The front side bus has recently been criticized by AMD as an old and slow technology that bottlenecks today's computer systems. While a faster CPU can execute individual instructions faster, this is wasted if it cannot fetch instructions and data as fast as it can execute them; when this happens, the CPU must wait for one or more clock cycles until the memory returns its value. Further, a fast CPU can be delayed when it must access other devices attached to the FSB. Thus, a slow FSB can theoretically become a bottleneck that slows down a fast CPU. In reality, today's most technologically advanced desktop CPUs do not use the front side bus architecture to its fullest extent, and the bottleneck does not become a problem in server equipment until eight or even sixteen CPUs are placed on one FSB.

Furthermore, although the front side bus architecture is an aging technology, it does have the advantage of high flexibility and low cost. There is no theoretical limit to the number of CPUs that can be placed on an FSB, and although performance will not scale perfectly linearly across additional CPUs (due to the architecture's bandwidth bottleneck), the benefits of multithreading across the sheer number of processors obtainable through this architecture fully outweigh the cost of the bandwidth loss.
