Parallel hardware refers to digital hardware that can perform multiple computations at the same time. It is prevalent in modern systems, largely because the power limitations of modern transistors make it impractical to keep raising single-core clock speeds.
Hardware terminology
Cores are physical mini-processors, each containing its own control unit and ALU; many processors contain multiple cores so they can work on multiple instruction streams simultaneously. A thread is a virtual unit of execution representing a single sequence of instructions processed by a core. The operating system can create and schedule multiple threads to share the resources of a single core, so one core can make progress on several tasks concurrently, and threads are generally lightweight to manage.
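As a minimal sketch (not from the original notes; the `worker` function is hypothetical), the C++ snippet below asks the standard library how many hardware threads the machine exposes and spawns one software thread per hardware thread, which the operating system then schedules across the available cores:

```cpp
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical per-thread work: each software thread just reports its index.
// In real code this would be a slice of some larger computation.
void worker(unsigned index) {
    std::cout << "thread " << index << " running\n";
}

int main() {
    // Number of hardware threads (cores x hardware threads per core);
    // may report 0 if the value cannot be determined, so fall back to 1.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;

    std::vector<std::thread> threads;
    for (unsigned i = 0; i < n; ++i) {
        threads.emplace_back(worker, i);  // one software thread per hardware thread
    }
    for (auto& t : threads) {
        t.join();  // wait for every thread to finish
    }
}
```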
While traditional multicycle CPUs run instructions one after another, modern CPUs overlap the execution of instructions, completing each one in fewer effective clock cycles and executing in a non-linear order. They achieve this in several ways, including pipelining.
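The overlap itself happens in hardware, but how much of it the core can find depends on the shape of the code. A hypothetical pair of loops (written here only to illustrate the idea):

```cpp
#include <cstddef>

// Independent element-wise adds: no iteration depends on the previous one,
// so a pipelined or superscalar core can keep several additions in flight.
void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = a[i] + b[i];
    }
}

// A serial dependency chain: every addition needs the previous result, so
// the additions largely execute one after another no matter how wide or
// deeply pipelined the core is.
float running_sum(const float* a, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        sum += a[i];
    }
    return sum;
}
```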
Types of parallelisation
There are two main types of application-level parallelism:
- Data-level parallelism (DLP) arises because many data items can be operated on at the same time.
- Task-level parallelism (TLP) arises because many tasks can be done independently and largely in parallel (both are illustrated in the sketch after this list).
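To make the distinction concrete, here is a minimal C++ sketch (the data sizes and tasks are invented for illustration): `scale` exhibits data-level parallelism, since the same operation applies independently to every element, while running it alongside an unrelated fill operation on another thread is task-level parallelism:

```cpp
#include <functional>
#include <numeric>
#include <thread>
#include <vector>

// Data-level parallelism: the same operation (scaling) applied to many
// independent elements; the elements could be split across cores or lanes.
void scale(std::vector<double>& data, double factor) {
    for (double& x : data) {
        x *= factor;
    }
}

// Task-level parallelism: two unrelated pieces of work running at the
// same time on different threads.
int main() {
    std::vector<double> samples(1'000'000, 1.0);
    std::vector<int> ids(1'000'000);

    std::thread t1(scale, std::ref(samples), 2.5);                 // task 1
    std::thread t2([&] { std::iota(ids.begin(), ids.end(), 0); }); // task 2

    t1.join();
    t2.join();
}
```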
Hardware then exploits these two kinds of parallelism in a few different ways:
- Instruction-level parallelism exploits DLP through techniques like pipelining and speculative execution.
- Vector architectures, GPUs, and multimedia instruction sets exploit DLP by applying a single instruction to a collection of data in parallel (see the SIMD sketch after this list).
- Thread-level parallelism exploits either DLP or TLP with a hardware model that allows parallel threads to interact.
- Request-level parallelism exploits parallelism among largely independent tasks specified by the user, the programmer, or the operating system.
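As an illustration of the single-instruction, multiple-data style used by vector and multimedia instruction sets, the sketch below uses x86 AVX intrinsics (this assumes an AVX-capable CPU and a compiler flag such as -mavx; the function name and loop structure are my own):

```cpp
#include <immintrin.h>  // x86 AVX intrinsics
#include <cstddef>

// One vector instruction adds eight floats at a time: the single-instruction,
// multiple-data (SIMD) approach used by vector and multimedia instruction sets.
void add_arrays_avx(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va   = _mm256_loadu_ps(a + i);   // load 8 floats from a
        __m256 vb   = _mm256_loadu_ps(b + i);   // load 8 floats from b
        __m256 vsum = _mm256_add_ps(va, vb);    // 8 additions in one instruction
        _mm256_storeu_ps(out + i, vsum);        // store 8 results
    }
    for (; i < n; ++i) {                        // scalar tail for leftover elements
        out[i] = a[i] + b[i];
    }
}
```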