

Showing posts from December, 2021

Parallel Processors

Parallelism in computer architecture is often explained through task-level parallelism, which is also called process-level parallelism. Task-level parallelism is “utilizing multiple processors by running independent programs simultaneously” (Patterson & Hennessy, 2014, sect. 6.1). The overall goals of parallelism are better performance and better energy efficiency. The basic purpose of a multiprocessor is to increase how many instructions can be completed within a clock cycle. According to Patterson and Hennessy (2014), the difficulty lies in writing correct programs that take full advantage of multiprocessing. When programs are not written correctly, the result is similar to using instruction-level parallelism on a uniprocessor.

To increase the speed of a multiprocessor, three things must be achieved: strong scaling, weak scaling, and load balancing. Strong scaling is speed-up achieved on a multiprocessor without increasing the size of the problem, and weak scaling is speed-up achieved while increasing the size of the problem in proportion to the increase in the number of processors.
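To make these ideas concrete, here is a minimal sketch (not from the post) of task-style parallelism in C with POSIX threads: an array is split into independent slices, each slice is summed by its own thread, and the partial results are combined at the end. The thread count, array size, and names such as sum_slice are illustrative assumptions. Strong scaling would mean this fixed problem size finishes faster as NUM_THREADS grows; weak scaling would mean growing N along with NUM_THREADS; load balancing shows up in how evenly the slices are divided.

/* Minimal sketch: splitting independent work across POSIX threads.
 * Thread count, array size, and names are arbitrary values for illustration. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000000

static double data[N];

struct slice { int start; int end; double partial; };

/* Each thread sums its own slice of the array independently. */
static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->start; i < s->end; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct slice slices[NUM_THREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    int chunk = N / NUM_THREADS;
    for (int t = 0; t < NUM_THREADS; t++) {
        slices[t].start = t * chunk;
        slices[t].end = (t == NUM_THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, sum_slice, &slices[t]);
    }

    /* Combine the partial results after all threads finish. */
    double total = 0.0;
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += slices[t].partial;
    }

    printf("total = %f\n", total);
    return 0;
}

Compiled with a flag such as -pthread, the same source runs on one core or many; the speed-up comes only from the explicit division of work among threads.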

Single and Multiprocessor Applications

The goal of a multiprocessor is to provide more computing power than a uniprocessor. The issue is writing correct programs that take advantage of multiprocessors or multicore processors. Programs need to be written correctly to get better performance because “otherwise, you would just use a sequential program on a uniprocessor, as sequential programming is simpler” (Patterson & Hennessy, 2014, sect. 6.2). If a program is not designed to take advantage of a multiprocessor, then it runs as a sequential program would on a uniprocessor. Most multiprocessors are shared memory multiprocessors (SMPs) because each one “shares a single physical address space” (Patterson & Hennessy, 2014, sect. 6.1).

A real-world example of how an application developed for a uniprocessor can affect performance when the same application is executed on a multiprocessor architecture can be found in the Windows operating systems. Early versions of Windows had diffe
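As a rough illustration of that shared physical address space (a sketch under assumptions, not anything taken from the post), the C fragment below has two threads update one shared counter through the same memory, coordinated with a mutex; without the explicit thread creation, the same work would simply run as one sequential stream on a single processor. The counter, iteration count, and function names are made up for the example.

/* Minimal sketch of shared-memory communication on an SMP:
 * both threads read and write the same variable in one address space.
 * The counter, iteration count, and mutex are illustrative choices. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* lives in the shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* coordinate access to shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);       /* 200000: both threads saw the same memory */
    return 0;
}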

Computer Caches

The basic purpose of a cache is to hold copies of the most frequently used main memory locations. In everyday language, a cache is a safe place for hiding or storing things. It improves response time when those locations are accessed again. If most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.

There are many ways to measure and improve cache performance. One technique reduces the miss rate by decreasing the probability that two different memory blocks will contend for the same cache location. Another technique reduces the miss penalty by adding a level to the cache hierarchy. The memory hierarchy separates computer storage into levels based on response time: small, fast memories such as registers and caches sit near the processor and hide the latency of the slower, larger devices farther away. Each of the various components can be viewed as part of a hierarchy of memories.

A virtual machine, commonly shorte
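The two improvement techniques above can be made concrete with the standard average memory access time (AMAT) formula, AMAT = hit time + miss rate × miss penalty. The sketch below compares a one-level cache against a hierarchy with an added L2 level; all of the latencies and miss rates are assumed numbers for illustration, not measurements from the post.

/* Minimal sketch of average memory access time (AMAT):
 * AMAT = hit time + miss rate * miss penalty.
 * All latencies (in cycles) and miss rates below are assumed values. */
#include <stdio.h>

int main(void)
{
    double l1_hit = 1.0;          /* L1 hit time in cycles */
    double l1_miss_rate = 0.05;   /* fraction of accesses that miss in L1 */
    double mem_penalty = 100.0;   /* cycles to reach main memory */

    /* One-level hierarchy: every L1 miss goes straight to main memory. */
    double amat_one_level = l1_hit + l1_miss_rate * mem_penalty;

    /* Adding an L2 level reduces the penalty seen by most L1 misses. */
    double l2_hit = 10.0;         /* L2 hit time in cycles */
    double l2_miss_rate = 0.20;   /* fraction of L1 misses that also miss in L2 */
    double amat_two_level = l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_penalty);

    printf("AMAT with L1 only : %.2f cycles\n", amat_one_level);  /* 6.00 */
    printf("AMAT with L1 + L2 : %.2f cycles\n", amat_two_level);  /* 2.50 */
    return 0;
}

With these assumed numbers, the extra cache level cuts the average latency from 6.0 cycles to 2.5 cycles, which is exactly the miss-penalty reduction the post describes.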