Computer architecture parallelism is explained through task-level parallelism, which is also called process-level parallelism. Task-level parallelism is “utilizing multiple processors by running independent programs simultaneously” (Patterson & Hennessy, 2014, sect. 6.1). The overall goal of parallelism in computer architecture is better performance and better energy efficiency. The basic purpose of a multiprocessor is to increase how many instructions it can complete within a clock cycle. According to Patterson and Hennessy (2014), the difficulty lies in writing correct programs that take full advantage of multiprocessing. When programs are not written to exploit the extra processors, the result is similar to using instruction-level parallelism on a uniprocessor. To increase the speed of a multiprocessor, three things come into play: strong scaling, weak scaling, and load balancing. Strong scaling is achieving speedup on the multiprocessor without making the problem larger, while weak scaling is achieving speedup as the size of the problem grows in proportion to the number of processors. Load balancing means dividing the work evenly so that no processor sits idle waiting on another to finish.
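To make these three ideas concrete, here is a small sketch in Python. The numbers are made up for illustration (they are not from Patterson and Hennessy); the point is only the arithmetic of how strong scaling, weak scaling, and load balancing show up as speedup.

```python
# Illustrative sketch with invented numbers: how strong scaling,
# weak scaling, and load balancing each affect measured speedup.

def speedup(time_one_proc, time_p_procs):
    # Speedup is serial execution time divided by parallel execution time.
    return time_one_proc / time_p_procs

# Strong scaling: the problem stays fixed at 1000 units of work.
# With perfect balance on 4 processors, each handles 250 units.
print(speedup(1000, 250))   # 4.0 -- ideal strong scaling

# Poor load balancing: one processor gets 400 of the 1000 units,
# so everyone waits on the slowest processor and parallel time is 400.
print(speedup(1000, 400))   # 2.5 -- imbalance wastes the extra processors

# Weak scaling: grow the problem to 4000 units across 4 processors.
# If each finishes its 1000 units in the original serial time, the
# machine does 4x the work in the same wall-clock time.
print(speedup(4000, 1000))  # 4.0 -- ideal weak scaling
```

The middle case is why load balancing matters: the parallel time is set by the busiest processor, so uneven work caps the speedup no matter how many processors are added.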