Computers that handle multiple threads at the same time have become the ones that are widely used, but what are multithreading and CPU utilization? And how does multithreading affect CPU utilization? These are the questions answered in this assignment.

The creation of a new thread of execution inside an existing process, rather than the initiation of a new process, is known as multithreading. One advantage of multithreading is that if a particular thread suffers numerous cache misses, other threads can make use of the otherwise idle computing resources, leading to faster execution of tasks. Another advantage is that if a particular thread does not need certain computing resources of the CPU, letting other threads use them keeps those resources from remaining idle.

The disadvantage is that threads can interfere with one another, especially when they share resources such as caches or TLBs. Multithreading also requires thread-switching hardware, which can slow the execution of an individual thread. Hardware support for multithreading is visible to software, so changes are required in operating systems and application programs to take advantage of it (Technical Users Guide, 2003).

The different types of multithreading are block multithreading, interleaved multithreading, simultaneous multithreading, and implicit and explicit multithreading.

In block multithreading, when a thread is blocked by an event such as a latency stall, the processor, instead of waiting for the stall to be resolved, starts executing another thread that is waiting to run. In interleaved multithreading, each pipeline stage holds an instruction from a different thread, so at most one instruction per thread is in flight and single-cycle or multi-cycle dependencies between a thread's instructions are avoided.

Simultaneous multithreading allows instructions from different threads to occupy the same pipeline stage in the same cycle (Glew, 2011). Implicit multithreading involves multiple subordinate processing units executing different threads under the control of a single sequencer unit (Ungerer, Robic, Silc, 2002). Explicit multithreaded processors can increase the performance of a multiprogramming workload, although their single-threaded performance may decrease compared to that of a single-threaded processor.

The time the CPU spends processing the instructions of a computer program, as opposed to performing input/output (I/O) operations, is known as CPU time or CPU utilization time.

Multithreading a processor improves its effective utilization. A multithreaded processor switches to a new thread and performs useful work whenever other threads are waiting for a memory response or a synchronization signal. As a result, CPU resources are used close to optimally, because resources not used by one thread are used by other threads awaiting execution.

 

Introduction

In today's world, where multitasking and simultaneous execution of tasks is more a necessity than a fashion, it has become imperative that computers perform multiple tasks at the same time. This involves handling multiple threads or programs simultaneously, and computers that handle multiple threads at once have become the ones that are widely used. In a multithreaded environment, how effectively the CPUs of these computers are utilized is a question uppermost in the minds of most computer engineers. Whether CPU resources are used optimally or poorly in a multithreaded environment is the question this assignment attempts to answer. Before answering it, we will first try to understand what multithreading is, what is meant by CPU utilization, and finally how multithreading affects CPU utilization and whether CPU utilization is optimal in a multithreaded environment.

What is multithreading?

The creation of a new thread of execution inside an existing process, rather than the initiation of a new process, is known as multithreading. The purpose of multithreading is to make efficient use of computer resources by allowing several threads of execution to share the resources of a single process. Multithreading is a form of time-division multiplexing in which a program is configured so that a process can split into different threads of execution. The simultaneous execution of two or more threads within the same program constitutes an efficient use of the computer's resources, especially on desktops and laptops. By allowing a single program to handle multiple tasks with a multithreading model, the system does not need two separate programs to initiate two separate processes in order to use the same files and resources at the same time (Tatum, 2011).
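As a concrete illustration, here is a minimal sketch, in C++ with std::thread, of a single process spawning additional threads of execution rather than launching new processes. The worker function and the thread count are illustrative assumptions, not taken from the cited sources.

```cpp
// Minimal sketch: creating new threads of execution inside one process.
// Compile with a C++11 (or later) compiler, e.g. g++ -std=c++11 -pthread.
#include <iostream>
#include <thread>
#include <vector>

void worker(int id) {
    // Each thread shares the process's address space and open files,
    // but has its own stack and program counter.
    std::cout << "thread " << id << " running inside the same process\n";
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(worker, i);   // a new thread, not a new process
    for (auto& t : threads)
        t.join();                          // wait for every thread to finish
    return 0;
}
```

Output lines from the different threads may interleave, which is itself a small reminder that the threads share the process's resources, including standard output.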

To support multithreading there must also be support from the computer's hardware, and modern hardware supports efficient execution of multiple threads. Today's multithreaded processors execute multiple threads, but the key point is that the threads share the same core; that is, they use the same computing units, CPU caches and Translation Look-aside Buffer (TLB). Thread-level and instruction-level parallelism thus increase the utilization of a single core.

Through multithreading, computers can spread work across multiple threads, and with suitable optimization techniques the overall system throughput for all tasks can be increased, which is a significant benefit.

Some of the advantages of multithreading are as follows. If a particular thread suffers numerous cache misses, other threads can make use of the otherwise idle computing resources, leading to faster execution of tasks; these resources would have remained idle if the other threads had to wait for one thread to finish. Another advantage is that if a particular thread does not need certain computing resources of the CPU, letting other threads use them keeps those resources from remaining idle. When more than one thread works on the same set of data, the threads may share the same cache, leading to better cache usage.

The disadvantage is that threads can interfere with one another, especially when they share resources such as caches or TLBs. Multithreading also requires thread-switching hardware, which can slow the execution of an individual thread. Hardware support for multithreading is visible to software, so changes are required in operating systems and application programs to take advantage of it (Technical Users Guide, 2003). The different types of multithreading are block multithreading, interleaved multithreading, simultaneous multithreading, and implicit and explicit multithreading.

Block Multithreading

When a thread is blocked by an event such as a latency stall, the multithreaded processor, instead of waiting for the stall to be resolved, starts executing another thread that is waiting to run. This is known as block multithreading (Technical Users Guide, 2003).

A block multithreading solution responds to a latency stall by executing instructions of a second thread, so the processor's execution pipelines can be utilized fully. Block multithreading helps system designers use small instruction caches and slow external memory while still achieving good overall performance. It is very efficient for CPU-bound applications that reside in cache memory and for applications that reside in on-chip memory. Block multithreading supports the fast interrupt response required by most embedded systems, and the cost of such multithreaded processors is lower.

Interleaved Multithreading

In interleaved multithreading, each pipeline stage holds an instruction from a different thread, so at most one instruction per thread is in flight and single-cycle or multi-cycle dependencies between a thread's instructions are avoided. Interleaving instructions from different threads over time is known as temporal multithreading, and it is further divided into fine-grain and coarse-grain multithreading depending on how frequently the interleaving occurs. In fine-grain multithreading, such as barrel processing, instructions are issued from a different thread every cycle, whereas coarse-grain multithreading switches to issuing instructions from a different thread only when the current thread causes a latency event such as a page fault. Because it requires fewer context switches between threads, coarse-grain multithreading is widely used (Shar, Davidson, 1974).
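To make the fine-grain versus coarse-grain distinction concrete, the toy sketch below simulates the two issue policies in software; the real interleaving happens in processor hardware, and the two instruction streams and the 'S' stall marker are purely illustrative assumptions.

```cpp
// Toy simulation (not real hardware) of fine-grain vs. coarse-grain
// interleaving. Each thread is a string of "instructions"; 'S' marks an
// instruction that causes a long-latency stall.
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Two equal-length instruction streams (illustrative only).
    std::vector<std::string> threads = {"aaSaa", "bbbSb"};

    // Fine-grain: issue from a different thread every cycle (round robin).
    std::cout << "fine-grain:   ";
    for (std::size_t i = 0; i < threads[0].size(); ++i)
        for (const auto& t : threads)
            std::cout << t[i];
    std::cout << '\n';

    // Coarse-grain (block multithreading): run one thread until it stalls,
    // then switch to the next waiting thread.
    std::cout << "coarse-grain: ";
    std::vector<std::size_t> pos(threads.size(), 0);
    std::size_t current = 0, finished = 0;
    while (finished < threads.size()) {
        if (pos[current] < threads[current].size()) {
            char ins = threads[current][pos[current]++];
            std::cout << ins;
            if (pos[current] == threads[current].size()) ++finished;
            if (ins == 'S')                                // latency event: switch
                current = (current + 1) % threads.size();
        } else {
            current = (current + 1) % threads.size();      // skip finished threads
        }
    }
    std::cout << '\n';
    return 0;
}
```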


Simultaneous Multithreading

Simultaneous multithreading allows instructions from different threads to occupy the same pipeline stage in the same cycle (Glew, 2011). Temporal multithreading and simultaneous multithreading (SMT) are the two main implementations of multithreading. With SMT, instructions from more than one thread can be in the pipeline at any given point in time, and this is achieved without many changes to the processor's design. The additions required are the ability to fetch instructions from several threads in a cycle and larger register files to hold the data of these multiple threads. The number of concurrent threads is chosen by the designers, but chip-design limitations often make two concurrent threads the practical choice for an SMT implementation. SMT is primarily an efficiency technique, and measuring or settling on a particular design is difficult, but research has shown that extra threads can be used to warm shared resources such as caches and thereby improve the performance of a single thread. SMT has proven to be not just an efficiency technique but also a vehicle for redundant computation, error detection and recovery (Tullsen, Eggers, Levy, 1995).
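As a small illustration, the C++ standard library can report how many hardware threads the machine exposes. On an SMT-capable CPU this count is typically a multiple (often two) of the number of physical cores, although the exact relationship depends on the processor; the sketch below only queries the value and makes no further assumptions.

```cpp
// Minimal sketch: querying the number of hardware threads the system exposes.
#include <iostream>
#include <thread>

int main() {
    // May return 0 if the value cannot be determined.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "hardware threads reported: " << n << '\n';
    return 0;
}
```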

Implicit Multithreading

Any architecture that can concurrently execute several threads derived from a single sequential program is known as implicit multithreading. The threads may be generated with or without help from the compiler. Examples of such approaches are multiscalar and trace processors. In implicit multithreading, multiple subordinate processing units execute different threads under the control of a single sequencer unit, whereas multithreaded processors are represented as a single processing unit with a single- or multiple-issue pipeline that processes instructions of different threads concurrently (Ungerer, Robic, Silc, 2002).

Explicit Multithreading

A CMP, also called a chip multiprocessor, integrates more than one processor onto a single chip, so every unit of a processor is duplicated and used independently of its copies on the chip. In direct contrast, a multithreaded processor interleaves the execution of instructions of different control threads in the same pipeline. It therefore has multiple program counters in the fetch unit and multiple thread contexts held in separate register sets on the same chip. The execution units are multiplexed between the thread contexts loaded in the register sets (Ungerer, Robic, Silc, 2002).

Latencies arising in the computation of a single instruction stream are filled with instructions from another thread; this is in direct contrast to today's RISC (Reduced Instruction Set Computing) or superscalar processors, which rely on a time-consuming, operating-system-based thread switch. When memory latencies occur, multithreaded processors tolerate them by overlapping the long-latency operations of one thread with the execution of other threads (Ungerer, Robic, Silc, 2002).

Depending on the processor design, either a single-issue instruction pipeline is used or instructions from different instruction streams are issued together. The latter are the SMT processors discussed earlier. SMT processors combine the multithreading technique with a superscalar processor so that the full issue bandwidth is used by issuing instructions from different threads in the same cycle (Ungerer, Robic, Silc, 2002).

Explicit multithreaded processors stand in direct contrast to implicit multithreaded processors in that they increase the performance of a multiprogramming workload, while their single-threaded performance may decrease compared to that of a single-threaded processor. Explicit multithreaded processors have low execution times for a multithreaded workload, whereas superscalar or implicit multithreaded processors have low execution times for a single program (Ungerer, Robic, Silc, 2002).

What is CPU utilization?

The time the CPU spends processing the instructions of a computer program, as opposed to performing input/output (I/O) operations, is known as CPU time or CPU utilization time.

CPU utilization refers to the level of CPU throughput; monitoring it shows the workload of a selected physical processor on physical computers and of virtual processors on virtual machines. If CPU utilization passes acceptable thresholds, it should trigger alerts that an administrator can act on before any outage happens, and CPU utilization tools track CPU data and store it in a central location (SolarWinds IT Management Glossary, 2010). CPU utilization is the most widely measured statistic on physical and virtual computers.
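As one hedged, Linux-specific illustration of how a monitoring tool might derive such a figure, the sketch below samples the aggregate counters on the first line of /proc/stat twice and computes the busy fraction of the interval; it is a toy example rather than a production monitor, and the one-second sampling interval is an arbitrary choice.

```cpp
// Linux-specific sketch: estimating overall CPU utilization from /proc/stat.
// The first line holds aggregate per-state tick counters
// (user, nice, system, idle, iowait, ...).
#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

// Read total ticks and idle ticks (idle + iowait) from /proc/stat.
static void sample(unsigned long long& idle, unsigned long long& total) {
    std::ifstream stat("/proc/stat");
    std::string line, label;
    std::getline(stat, line);                  // aggregate "cpu" line
    std::istringstream in(line);
    in >> label;                               // skip the "cpu" label
    std::vector<unsigned long long> v;
    unsigned long long x;
    while (in >> x) v.push_back(x);
    total = 0;
    for (unsigned long long t : v) total += t;
    idle = (v.size() > 3 ? v[3] : 0) + (v.size() > 4 ? v[4] : 0);
}

int main() {
    unsigned long long idle1, total1, idle2, total2;
    sample(idle1, total1);
    std::this_thread::sleep_for(std::chrono::seconds(1));  // sampling interval
    sample(idle2, total2);
    double busy = double((total2 - total1) - (idle2 - idle1));
    std::cout << "CPU utilization over the interval: "
              << 100.0 * busy / double(total2 - total1) << "%\n";
    return 0;
}
```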

There are several methods of tracking CPU utilization and arriving at an average. For servers, dual-core and multi-core CPUs can provide considerable benefits. Threads can be allocated to specific CPUs to improve performance and to reduce switching. The type of CPU also affects performance; for example, AMD and Intel place virtualization extensions directly on the CPU, and such hardware-assisted virtualization extensions are common on new servers and desktops (Desai, 2007).
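The sketch below shows one way a thread could be allocated to a specific CPU on Linux, using the GNU extension pthread_setaffinity_np through std::thread::native_handle(); the choice of CPU 0 and the empty worker body are illustrative assumptions.

```cpp
// Linux-specific sketch: pinning a thread to one CPU so the scheduler does
// not migrate it between cores (compile with g++ -pthread).
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <iostream>
#include <thread>

int main() {
    std::thread t([] {
        // ... work that benefits from staying on one core (warm caches) ...
    });

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                          // allow CPU 0 only (illustrative)
    // The affinity takes effect shortly after the thread starts; acceptable
    // for a sketch.
    if (pthread_setaffinity_np(t.native_handle(), sizeof(set), &set) != 0)
        std::cerr << "failed to set thread affinity\n";

    t.join();
    return 0;
}
```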

Measuring CPU utilization is important on both virtual and physical computers, and for virtual machines administrators can balance resource allocation in several ways. Some operating systems allow full or partial dedication of a CPU to a particular virtual machine, which reduces the overhead caused by heavy processor switching on busy systems. Priorities can be set for each virtual machine, so that when the physical server reaches maximum utilization, virtual machines are given processing time based on their level of importance. Another technique for effective CPU utilization is placing restrictions on the minimum and maximum amount of CPU resources that a specific virtual machine or workload can use; this helps avoid situations in which a failed guest operating system monopolizes host resources (Desai, 2007).

How does multithreading affect CPU utilization?

Multithreading a processor improves its effective utilization. A multithreaded processor switches to a new thread and performs useful work when other threads are waiting for a memory response or a synchronization signal. Multithreading usually interleaves instructions from different processes, and the same term is applied to systems that interleave blocks of instructions from different processes. Processors that interleave cycle by cycle are called finely multithreaded processors, and the rest are known as coarsely multithreaded or block multithreaded processors.

Multithreaded processors are kept constantly engaged by the multiple threads or programs they process. They do not sit idle waiting for one thread to complete a memory fetch or a synchronization step; instead, while that thread waits, the other threads in the queue awaiting execution are executed, so the CPU never remains idle or underutilized, as the sketch below illustrates.
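The minimal sketch below mimics this behaviour at the software level: one thread blocks on a simulated long-latency wait while another keeps the CPU busy with useful computation; the sleep duration and the loop bound are arbitrary illustrative values.

```cpp
// Minimal sketch: overlapping one thread's wait with useful work in another.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    std::thread waiter([] {
        // Stand-in for a long-latency operation (memory fetch, I/O, ...).
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        std::cout << "waiter: long-latency operation finished\n";
    });

    std::thread worker([] {
        // Useful work that keeps the CPU utilized while 'waiter' is blocked.
        volatile unsigned long long sum = 0;
        for (unsigned long long i = 0; i < 100000000ULL; ++i) sum += i;
        std::cout << "worker: finished computing while the other thread waited\n";
    });

    waiter.join();
    worker.join();
    return 0;
}
```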

Is there optimum utilization of CPU resources in a multi-threaded environment?

There is optimum utilization of CPU resources because all those resources that are not utilized by one thread are utilized by other threads awaiting execution. As such CPU resources are not wasted and there is effective CPU utilization in a multithreaded environment.

Conclusion

In today's world, where multitasking is more a necessity than a fashion, it has become imperative that computers perform multiple tasks at the same time and at greater speeds; as a consequence, multithreading came into existence in the computer world. Today's computers are much faster than older computers because of multithreading and effective CPU utilization. In the future, computers will be rated on their ability to multitask and run multiple threads at the same time while ensuring effective CPU utilization.

 
