The memory architecture


Our special thanks goes to our project supervisor, Mrs Vidhu Bhasin, whose timely guidance and support at crucial moments made undertaking this project an enriching learning experience. We would also like to thank the module designer for setting an assignment that made us explore many unfamiliar areas of knowledge and clarify various concepts of CSA.

Finally, we appreciate the APPIT labs and the library for all the facilities and supporting materials they provided, which helped towards the completion of the project.


For this CSA assignment, we have selected the topic "Memory Architecture". The reason for choosing this topic is that it is a fundamental part of a computer system. We had a great interest in learning about the different components of the memory hierarchy, which include RAM, cache memory and registers. We have provided material covering the different aspects mentioned in the question that was given to us.

During the research we came across different terms and materials and gained knowledge about the areas on which we were working. We have carried out our work according to the specific format given to us: each section first contains an introduction and then a description. We proceeded according to the Gantt chart we had prepared, and the research on the different topics was done in that order. We hope that the work we are presenting to our respected module lecturer meets her requirements and that she is satisfied with the hard work we have put in with great interest and enthusiasm.


For this CSA project we selected the topic "Memory Architecture". It covers the different memory levels, such as RAM, caches and registers. The processor uses these locations according to the task it is handling.

Registers are the fastest of all the memory locations. They are used mainly by the processor to carry out its own work; they store the memory address of the current instruction, the instruction being executed, loop counter variables, and so on.

Cache stores frequently or recently used data and information. It acts as an intermediary between the CPU and main memory (RAM).

RAM is the location into which the processor loads the program being executed. It is the working memory used for processing, and it is larger than the registers and cache memory.

RAM is slower to access than registers and cache.

The need for a memory hierarchy lies in the fact that it organises the different levels of memory storage. It is designed to take advantage of memory locality in computer programs.

When the processor requests some data or an instruction, the level 1 cache is accessed first. If the required data is not found there, the search continues in the level 2 cache. If there is a miss there as well, the search moves on to RAM, and if the data is still not found, the hard disk is accessed to fetch it. Once fetched, the data is placed in RAM, then in the level 2 cache, then level 1, and finally it reaches the processor. In this way recently used data ends up in the caches, so if the same data is needed again it can be read directly from the cache. This greatly reduces access time.
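This lookup order can be sketched in code. The following is a minimal, illustrative C sketch, not a description of real hardware: the level names, the SLOTS capacity, and helpers such as level_holds and cpu_request are made up for the example. It only shows the order in which the levels are consulted and how fetched data is copied back up the hierarchy.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical levels, fastest first. The disk is assumed to hold everything. */
enum { L1, L2, MAIN_RAM, DISK, NUM_LEVELS };
static const char *level_name[NUM_LEVELS] = { "L1 cache", "L2 cache", "RAM", "hard disk" };

#define SLOTS 8                                 /* toy capacity per level */
static unsigned held[NUM_LEVELS][SLOTS];
static int count[NUM_LEVELS];
static int next_slot[NUM_LEVELS];

static bool level_holds(int level, unsigned addr) {
    for (int i = 0; i < count[level]; i++)
        if (held[level][i] == addr) return true;
    return level == DISK;                       /* backing store always succeeds */
}

static void fill_level(int level, unsigned addr) {
    held[level][next_slot[level]] = addr;       /* trivial round-robin replacement */
    next_slot[level] = (next_slot[level] + 1) % SLOTS;
    if (count[level] < SLOTS) count[level]++;
}

/* Search L1 -> L2 -> RAM -> disk, then copy the data back up toward the CPU. */
static void cpu_request(unsigned addr) {
    int level = L1;
    while (!level_holds(level, addr)) level++;
    printf("address %u found in %s\n", addr, level_name[level]);
    for (int up = level - 1; up >= L1; up--) fill_level(up, addr);
}

int main(void) {
    cpu_request(100);   /* miss everywhere: served by the disk, then cached   */
    cpu_request(100);   /* same address again: now a hit in the L1 cache      */
    return 0;
}
```

The second request for the same address is satisfied from the L1 cache, which is exactly the behaviour described above.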


A register is a small, high-speed storage location that holds values for internal operations, such as the address of the instruction being executed and the data being processed. It is located on the CPU itself, so its contents can be accessed more quickly than storage available anywhere else.


  • The computer cannot perform arithmetic operations directly in memory, so it uses registers for this task.
  • Registers increase processing speed, since frequently used data is kept in them.
  • Registers also store information about the current status of the CPU and the currently executing program.
  • The data stored in a register can be shifted or rotated left or right by one or more bits (see the sketch below).
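As a small illustration of the last point, the C sketch below performs a logical shift and a rotate on a 32-bit value, which is the kind of bit manipulation a register-level shift or rotate instruction carries out. The rotate is written out manually, since standard C has no rotate operator.

```c
#include <stdint.h>
#include <stdio.h>

/* Rotate a 32-bit value left by n bits (0 < n < 32). */
static uint32_t rotate_left(uint32_t value, unsigned n) {
    return (value << n) | (value >> (32 - n));
}

int main(void) {
    uint32_t reg = 0x80000001u;

    printf("shift left by 1 : 0x%08X\n", reg << 1);            /* top bit is lost        */
    printf("rotate left by 1: 0x%08X\n", rotate_left(reg, 1)); /* top bit wraps around   */
    return 0;
}
```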


PC: Program Counter: Stores the address of the macro-instruction currently being executed.

AC: Accumulator: Stores a previously calculated value or a value loaded from the main memory.

IR: Instruction Register: Stores a copy of the instruction loaded from main memory.

TIR: Temporary Instruction Register. While the CPU works out exactly what an instruction is supposed to do, the part of the instruction currently being decoded is held in the TIR.

IA: The CPU cannot use a constant number directly; to access it, the value must be held in a register or in main memory. This register is therefore set aside to hold such an often-used number.

AMASK: Address Mask. When the CPU needs the address of the target word that an instruction is using, the AMASK is combined with the instruction to eliminate the opcode bits, leaving only the desired address (see the sketch after this list of registers).

MAR: Memory Address Register. It contains the address of the location in main memory that the CPU wants to work with. It is directly connected to the RAM chips on the motherboard.

MBR: Memory Buffer Register. The word that was either loaded from main memory or that is going to be stored in main memory is stored in this register. It is also directly connected to the RAM chips on the motherboard.
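The AMASK idea can be illustrated with a short C sketch. The 16-bit instruction layout, the 12-bit address field and the example encoding below are made-up values for illustration only; the point is simply that ANDing the instruction word with the address mask strips the opcode bits and leaves the operand address.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical 16-bit instruction word: 4 opcode bits, 12 address bits. */
    const uint16_t AMASK = 0x0FFF;       /* keeps the low 12 address bits       */
    uint16_t instruction = 0x3ABC;       /* opcode 0x3, operand address 0xABC   */

    uint16_t opcode  = instruction >> 12;      /* top 4 bits: 0x3   */
    uint16_t address = instruction & AMASK;    /* low 12 bits: 0xABC */

    printf("opcode = 0x%X, address = 0x%03X\n", opcode, address);
    return 0;
}
```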


32-Bit Registers

While using registers it is very important to understand the size of the registers and the data that you can place in them. There are three native data sizes that can be used by the normal integer instructions: BYTE, WORD and DWORD, corresponding to 8-bit, 16-bit and 32-bit values.

A 32-bit Intel or compatible processor therefore has these three native data sizes, which can be shown in hex notation:

BYTE  00
WORD  00 00
DWORD 00 00 00 00

In terms of registers, this corresponds to the three sizes that can be addressed with the normal integer registers. Older code that uses 8- and 16-bit registers remains compatible with Intel and compatible processors, because any of the general-purpose registers can be accessed in three different ways. Using the EAX register as an example:

AL or AH = 8 bit
AX = 16 bit
EAX = 32 bit
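The way AL/AH, AX and EAX name overlapping parts of the same register can be modelled in C with a union. This is only an illustrative sketch, and it assumes a little-endian machine (as x86 is), where the low byte and low word sit at the lowest address.

```c
#include <stdint.h>
#include <stdio.h>

/* Model of one 32-bit general-purpose register (little-endian layout assumed). */
union reg32 {
    uint32_t eax;                  /* full 32-bit register   */
    uint16_t ax;                   /* low 16 bits            */
    struct { uint8_t al, ah; } b;  /* low byte and next byte */
};

int main(void) {
    union reg32 r;
    r.eax = 0x12345678u;

    printf("EAX = 0x%08X\n", r.eax);   /* 0x12345678 */
    printf("AX  = 0x%04X\n", r.ax);    /* 0x5678     */
    printf("AL  = 0x%02X\n", r.b.al);  /* 0x78       */
    printf("AH  = 0x%02X\n", r.b.ah);  /* 0x56       */
    return 0;
}
```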

Schematic of a general-purpose 32-bit register (figure not reproduced here).

64-Bit Registers

64-bit systems provide substantial benefits in terms of processing speed. In 32-bit computing, integer math uses 32-bit wide general-purpose registers. With 64-bit computing, each general-purpose register is 64 bits wide and can represent a much larger integer. High-level languages, such as C and C++, support 64-bit mathematical operations on 32-bit processors by splitting a 64-bit number across two 32-bit registers. The 64-bit integer types (such as int64_t, sometimes called "long long" on 32-bit systems) can be contained within a single register on a 64-bit machine. This register-width difference produces a substantial difference in resource requirements when performing 64-bit math.
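The point about 64-bit integers fitting in a single register can be seen in C: the same int64_t arithmetic compiles to a single register operation on a 64-bit target, but must be split across two 32-bit registers (with a carry between them) on a 32-bit target. A minimal sketch:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int64_t a = 4000000000LL;   /* does not fit in a signed 32-bit register */
    int64_t b = 3000000000LL;
    int64_t sum = a + b;        /* one add on a 64-bit CPU; add plus add-with-carry on a 32-bit CPU */

    printf("sum = %lld\n", (long long)sum);            /* 7000000000 */
    printf("int64_t occupies %zu bytes\n", sizeof(int64_t));
    return 0;
}
```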


The register organization consists of a standard 64-bit Program Counter or instruction pointer (PC) register, and also a standard 32-bit Status or flags (ST) register.

It also has a 64-bit Memory Window (MW) register, which contains the base address of the active memory window. The memory window is a set of 32 eight-byte blocks which can be accessed using short versions of the standard instructions. Data pointed to by a memory window is usually already in the L0 data cache, providing zero-latency accesses. In this register organization there are 32 possible memory windows that can be accessed with an 8 KB L0 data cache.


A cache is a form of SRAM that can be accessed more easily and more quickly by the processor.

Cache memory is hidden from the programmer and appears as part of the system's memory space.

It keeps a record of:

  • A copy of the data and instructions currently being used by the CPU.
  • Data and pages that are likely to be used frequently.

More Details-

  • It is much faster memory than main memory (RAM).
  • It is relatively small and expensive.
  • Wait states are significantly reduced and the processor works more effectively.
  • Access time is decreased greatly.
  • A typical access time for SRAM is 15 ns. This allows small portions of main memory to be accessed 3 to 4 times faster than DRAM (main memory).
  • This boosts the effective performance of the slower, larger main memory.

Reasons for Cache

  • Cache increases the efficiency of the processor by accelerating CPU instruction retrieval and reducing its waiting time.
  • Cache acts as an intermediary between the processor and RAM or secondary memory. The processor first searches for the required information in the fast cache, and only if it is not found there does it proceed to the comparatively slower RAM or secondary storage.
  • In the absence of a cache, the CPU has to retrieve instructions directly from RAM. Since RAM operates at a much lower speed than the CPU (on the order of 18 ns versus 0.3 ns), the full efficiency of the CPU is not utilized.

Working of cache memory

When the processor (CPU) seeks a piece of information, it first looks in the level 1 cache, since that is the fastest. If it finds the information there (called a cache hit), it uses it with no performance delay. If not, the level 2 cache is searched; if the information is found there (a level 2 "hit"), it is used with relatively little delay. Otherwise, a request is issued to read it from the system RAM. The system RAM may in turn either have the information available or have to get it from the still slower hard disk or CD-ROM.
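The benefit of hits at each level can be quantified with the usual average-access-time idea. The C sketch below uses made-up hit fractions and latencies (assumptions for illustration, not measured values) to show how the average access time is built up level by level.

```c
#include <stdio.h>

int main(void) {
    /* Assumed fractions of all accesses satisfied at each level (they sum to 1). */
    double l1_hit = 0.90, l2_hit = 0.08, ram_hit = 0.019, disk_hit = 0.001;
    /* Assumed access latencies for each level, in nanoseconds. */
    double l1_ns = 1.0, l2_ns = 5.0, ram_ns = 60.0, disk_ns = 5e6;

    /* An access that misses a level pays that level's latency plus the next one's. */
    double average = l1_hit  * l1_ns
                   + l2_hit  * (l1_ns + l2_ns)
                   + ram_hit * (l1_ns + l2_ns + ram_ns)
                   + disk_hit * (l1_ns + l2_ns + ram_ns + disk_ns);

    printf("average access time = %.2f ns\n", average);
    return 0;
}
```

Even with these made-up numbers, the rare disk accesses dominate the average, which is why keeping the working set in the caches matters so much.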

Cache Read Operation

The basic principle that cache technology is based upon is locality of reference:

  • Temporal locality - an address that has just been referenced is very likely to be referenced again.
  • Spatial locality - addresses in close proximity to a referenced address are likely to be referenced soon (illustrated in the sketch after this list).
  • Sequentiality - future memory accesses are very likely to be in sequential order with the current access.
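Spatial locality is easy to demonstrate in C. Traversing an array in the order it is laid out in memory touches neighbouring addresses, so each cache line that is fetched is fully used; traversing it the other way keeps jumping far apart in memory. The 1024 x 1024 size below is arbitrary.

```c
#include <stdio.h>

#define N 1024
static int matrix[N][N];

int main(void) {
    long sum = 0;

    /* Row-major traversal: consecutive iterations touch adjacent addresses
       (good spatial locality), so each fetched cache line is fully used.   */
    for (int row = 0; row < N; row++)
        for (int col = 0; col < N; col++)
            sum += matrix[row][col];

    /* Column-major traversal: consecutive iterations jump N * sizeof(int)
       bytes apart (poor spatial locality), causing far more cache misses.  */
    for (int col = 0; col < N; col++)
        for (int row = 0; row < N; row++)
            sum += matrix[row][col];

    printf("sum = %ld\n", sum);
    return 0;
}
```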


The cache is usually labelled according to its distance from the processor.

Level 1 or primary cache - Primary cache (limited in size, implemented using static RAM (SRAM)) is the fastest form of storage. It is built on the same chip as the processor, inside the CPU, and is used for temporary storage of instructions and data organized in blocks of 32 bytes.

Level 2 or secondary cache - L2 cache stores more data than the L1. It is also implemented in SRAM. It typically comes in two sizes, 256 KB or 512 KB, and is soldered onto the motherboard, fitted in a Card Edge Low Profile (CELP) socket or, more recently, mounted on a COAST module.

Level 3 cache - A memory bank built onto the motherboard or within the CPU module. The L3 cache feeds the L2 cache, and its memory is typically slower than the L2 memory, but faster than main memory.

Cache Organization

Cache Page

Two terms best describe the cache organization: cache page and cache line. Main memory is divided into equal pieces called cache pages. The size of a page depends on the size of the cache and how the cache is organized. A cache page is in turn broken into smaller pieces called cache lines.


Fully Associative

This organizational scheme, in which main memory and cache memory are divided into lines of equal size (it does not use cache pages, and it gives the best performance), allows any line in main memory to be stored at any location in the cache.

The disadvantage is that it is slow, since the current address must be compared with all the addresses present in the tag RAM. It is therefore used only for small caches, typically less than 4 KB.

Direct Map

In this scheme (also known as a 1-way set-associative cache), main memory is divided into cache pages, each page being equal to the cache in size. A given line of memory can be stored only in the corresponding line of the cache. It is less complex and far less expensive than the other caching schemes.

The disadvantage is that it is far less flexible, so performance is much lower, especially when jumping back and forth between cache pages.

Set Associative

It is a hybrid between a fully associative cache and a direct-mapped cache.

In this scheme the appropriate set for a given address is found first (as in the direct-mapped scheme), and then the appropriate slot within that set is searched (as in the fully associative scheme), as the sketch below illustrates.
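The two-step lookup can be made concrete with a little address arithmetic. The cache geometry used here (32-byte lines, 128 sets, 4 ways, 16 KB total) is an assumed example, not a description of any particular processor; it just shows how the byte offset, set index and tag are cut out of an address.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed example geometry: 32-byte lines, 128 sets, 4 ways (16 KB total). */
#define LINE_SIZE 32u
#define NUM_SETS  128u

int main(void) {
    uint32_t address = 0x0040ABCDu;

    uint32_t offset = address % LINE_SIZE;               /* byte within the cache line            */
    uint32_t set    = (address / LINE_SIZE) % NUM_SETS;  /* which set (the direct-mapped step)    */
    uint32_t tag    = address / (LINE_SIZE * NUM_SETS);  /* compared against every way in the set */

    printf("offset = %u, set = %u, tag = 0x%X\n", offset, set, tag);
    return 0;
}
```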

Strategies for Cache Memories

  • Cache hit: A cache hit is the successful retrieval of requested data from the cache. The more cache hits, the faster the computer will operate. The larger the cache, the greater the chance that a particular item will already be in the cache.

  • Write-through cache: A disk or memory cache that supports the caching of writes. Data written by the CPU to memory or to disk is also written into the cache. When a later read operation needs that same data, read performance is improved because the data is already in the high-speed cache.

  • Write-back cache: A caching method in which modifications to data in the cache are not copied back to the cache's source until absolutely necessary. Write-back caching is available on many microprocessors, including all Intel processors since the 80486. With these microprocessors, modifications to data stored in the L1 cache are not copied to main memory until absolutely necessary. (A short sketch contrasting the two write policies follows this list.)

  • Cache miss: A cache miss is a failed attempt to read or write data or instructions in the cache. Misses may be:

    • Instruction read miss
    • Data read miss
    • Data write miss
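The difference between the two write policies described above can be sketched in a few lines of C. The single-slot "cache" and the names used here are purely illustrative; the point is only when the backing memory gets updated.

```c
#include <stdbool.h>
#include <stdio.h>

static int memory[16];            /* stand-in for main memory              */
static int cached_value;          /* toy one-slot cache                    */
static int cached_addr = -1;
static bool dirty = false;        /* only meaningful for write-back        */

/* Write-through: update the cache AND main memory on every write. */
static void write_through(int addr, int value) {
    cached_addr = addr; cached_value = value;
    memory[addr] = value;
}

/* Write-back: update only the cache and mark it dirty; copy to memory
   only when the slot is evicted (here, when a different address is written). */
static void write_back(int addr, int value) {
    if (dirty && cached_addr >= 0 && cached_addr != addr)
        memory[cached_addr] = cached_value;       /* delayed copy to memory */
    cached_addr = addr; cached_value = value; dirty = true;
}

int main(void) {
    write_through(3, 111);
    printf("write-through: memory[3] = %d (updated immediately)\n", memory[3]);

    write_back(5, 222);
    printf("write-back:    memory[5] = %d (not yet written back)\n", memory[5]);
    write_back(7, 333);                            /* evicts address 5 */
    printf("write-back:    memory[5] = %d (written back on eviction)\n", memory[5]);
    return 0;
}
```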



The most important part of a computer is its primary memory, known as RAM (Random Access Memory). It is considered "random access" because any memory cell can be accessed directly if its row and column are known. It is the working memory that stores the information a program requires while it is running. It is used by the system to hold data, in the form of files, for processing by the CPU. RAM capacity is expressed in MB or GB. The 32-bit versions of Windows XP and Windows Vista support up to 4 GB of RAM, but can only use about 3.5 GB in practice.


RAM is an essential element of a computer system, as it acts as the primary memory. It is the place in a computer where the OS, application programs, and data in current use are kept so that they can be reached quickly by the computer's processor. The basic reason for using RAM is that it is much faster to read from and write to than the other kinds of storage in a computer: the hard disk, floppy disk, and CD-ROM. The more RAM a system has, the less time is spent reading data in from the hard disk. The access time of RAM is expressed in nanoseconds, whereas that of a hard disk is in milliseconds.


Random Access Memory works like a leaky bucket: it needs to be refreshed frequently or its contents will discharge to nothing. New data goes into the RAM and older data is automatically discarded to make room for it. You can access any part of the memory if you know the row and column that intersect at that cell (see the sketch below). The small capacitors are arranged in banks of 8. A capacitor holding a charge is given the value 1, and one without a charge is given the value 0. Each 1 and 0 is a bit, and eight bits together are called a byte.
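The row-and-column idea can be illustrated with the usual mapping between a flat address and a (row, column) pair. The 1024-column grid below is an assumed size for the example, not a property of any specific RAM chip.

```c
#include <stdio.h>

#define COLS 1024   /* assumed number of columns in the cell grid */

int main(void) {
    unsigned address = 54321;

    /* A flat address selects one cell at the intersection of a row and a column. */
    unsigned row = address / COLS;
    unsigned col = address % COLS;

    printf("address %u -> row %u, column %u\n", address, row, col);
    printf("row %u, column %u -> address %u\n", row, col, row * COLS + col);
    return 0;
}
```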


There are basically two types of RAM: static RAM (SRAM) and dynamic RAM (DRAM).


SRAM stands for Static RAM.

SRAM retains its contents as long as power is supplied, without needing to be refreshed; it is built from on/off switches (flip-flops).

SRAM can respond much faster than DRAM.


DRAM must be refreshed every few milliseconds. It is built from tiny capacitors. It is by far the cheapest type to build, and newer, faster DRAM types are being developed continuously.

There are three main types of memory in common use today: SDRAM (Synchronous Dynamic RAM), DDR-SDRAM (Double Data Rate SDRAM) and RDRAM (Rambus Dynamic RAM).

Various forms of RAM are stated below:-

  1. SDRAM: SDRAM stands for "Synchronous Dynamic Random Access Memory" and it can run at speeds of up to 133 MHz. SDRAM differs from other RAM because of its synchronized interface: while other types of RAM respond to whatever information is passed along their control inputs, SDRAM is synchronized with the computer's system bus and runs in step with it.

  2. DDR SDRAM: Double Data Rate (DDR) SDRAM is essentially SDRAM that works twice as fast. It does this by using the computer's clock cycle more efficiently in order to increase speed and functionality.

  3. DDR2 SDRAM: DDR2 SDRAM operates on the same principles as DDR SDRAM, but in a more efficient manner, handling memory transfer rates more efficiently to achieve higher speeds.

  4. DRAM: Dynamic Random Access Memory (DRAM) works by constantly refreshing the information stored in its banks of cells. If the information is not constantly refreshed, deletion or data corruption can occur.

  5. RDRAM: Rambus Dynamic Random Access Memory (RDRAM) is a DRAM technology created by Rambus Inc. RDRAM differs from other types of DRAM because of its increased bandwidth, which allows it to operate at potentially higher speeds.

Memory Hierarchy

The memory hierarchy is the hierarchical arrangement of storage in modern computer architectures. Each level of the hierarchy is faster, smaller and lower in latency than the levels below it.

Most modern programming languages deal mainly with two levels of memory, main memory and disk storage (registers may, however, be accessed directly in assembly language and in inline assembler in languages such as C).

  • Programmers are responsible for the movement of data between disk and memory through file I/O.
  • Hardware is responsible for movement of data between caches and memory.
  • Optimizing compilers generate code that, when executed, will cause the hardware to use caches and registers efficiently.

Requirements of the Memory Hierarchy

  • It helps in understanding the levels of the different memory storages.
  • The memory hierarchy concept illustrates how data moves between the different levels.
  • It is designed to take advantage of memory locality in computer programs.
  • The processor searches for the required data or instructions in hierarchical order, from top to bottom.
  • The effective memory access speed seen by the processor increases greatly.
  • Often-referenced data is moved into fast memory, while less-used data is left in slower memory.
  • The waiting time of the CPU decreases.
  • It helps in understanding how the CPU searches for data: first in the registers, then, in case of a miss, in the level 1 cache, then level 2, then RAM, and finally on the hard disk, until the data is found.

Comparison of Processor Access Times

  • Registers - fastest (only hundreds of bytes in size)
  • L1 cache - accessed in just a few cycles (usually tens of KB)
  • L2 cache - higher latency than L1 (usually 512 KiB or more)
  • L3 cache - higher latency than L2 (often several MiB or more)
  • Main memory - takes hundreds of cycles (multiple GB)
  • Disk storage - millions of cycles
  • Tertiary storage - tape, optical disk

Working of Memory Hierarchy

The whole idea of the memory hierarchy is to take advantage of the principle of locality of reference: often-referenced data is moved into fast memory and less-used data is left in slower memory. The selection of often-used versus lesser-used data varies over the execution of any given program, so data cannot simply be fixed at particular levels of the memory hierarchy for the whole run. During the program's execution, the memory subsystems need to move data between themselves dynamically to adjust for changes in locality of reference.

Moving data between the registers and the rest of the memory hierarchy is strictly the program's job. The program loads data into registers and stores register data back to memory using instructions such as MOV. Selecting an instruction sequence that keeps heavily referenced data in the registers for as long as possible is the programmer's or compiler's responsibility, as the example below illustrates.
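This can be seen in a tiny C example. Whether a value actually stays in a register is ultimately up to the compiler, but the style below (copying a heavily used value into a local variable for the duration of a loop) is the usual way of giving it the chance to do so; the function and variable names are made up for illustration.

```c
#include <stddef.h>

/* Scale every element of an array by *factor.
   Copying *factor into a local variable lets the compiler keep it in a
   register for the whole loop instead of re-reading memory on every pass
   (which it may otherwise have to do if the pointers could alias). */
void scale(double *data, size_t n, const double *factor) {
    double f = *factor;            /* heavily referenced value, held locally */
    for (size_t i = 0; i < n; i++)
        data[i] *= f;
}
```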

If neither the level 1 nor the level 2 cache subsystem has a copy of the data, the memory subsystem goes to main memory to get it. If the data is found in main memory, the memory subsystem copies it to the level 2 cache, which passes it to the level 1 cache, which gives it to the CPU. Once again, the data is now in the level 1 cache, so any references to this data in the near future will be served from the level 1 cache.

Data movement in a memory hierarchy

If the data is not found in main memory but is present in virtual memory on some storage device, the operating system takes over, reads the data from disk (or another device, such as a network storage server) and places it in main memory. Main memory then passes this data through the caches to the CPU.

Because of locality of reference, the largest percentage of memory accesses takes place in the level 1 cache. The next largest percentage occurs in the level 2 cache, and the most infrequent accesses are those that have to go all the way to virtual memory.


It has been an exciting journey completing this assignment, during which we came across various ups and downs, but in the end we finished it and gained knowledge about the different aspects of the computer system and its architecture. After completing the CSA assignment on memory architecture, we have learnt the working and functioning of the different memory storages: registers, cache memory and RAM. Among these, the registers are the fastest, followed by the cache and then RAM. We also learned about the memory hierarchy, in which we came across the order in which data is stored and searched in memory. The assignment helped us to learn management techniques for the memory components and their hierarchy. The whole study of the memory hierarchy and the different storage elements, the cache, RAM and the registers, will surely help us in the future, as it is a fundamental part of computer system architecture.



