Task-based parallelism (TBP) breaks a computation into small pieces (tasks) and assigns those tasks to processors (CPUs) running in parallel. It is a technique for organizing computations so they can be handled efficiently by supercomputers with many processors, and it is especially desirable for long calculations, e.g., those lasting days or weeks: because supercomputers are scarce resources shared by many scientists, efficient use of their processors on each calculation results in more science accomplished.
The tasks of a computation invariably require data produced by tasks carried out earlier (often by more than one), and they produce data required by later tasks that combine or summarize it. Thus the tasks of a computation or simulation are not completely free to run at any time: some ordering is necessary. The challenge is to organize the computation into tasks that can be managed without too much overhead (management itself requires computation), with data brought to the processor where it is needed, so that CPUs rarely sit idle awaiting the resources necessary to carry out tasks. A computation is well suited to the technique if it can be divided into tasks of limited and comparable calculation time (i.e., not differing by many orders of magnitude, and not containing loops whose iteration counts vary greatly), with the data required by each task manageably small, i.e., coming from a few earlier tasks rather than from all or a huge number of them. With this, a scheduler has a good chance of keeping the CPUs busy and completing the computation efficiently. This is of enough value that programmers go to great lengths devising ways to organize computations appropriately and effectively.
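The ordering constraints described above can be sketched in code. The following is a minimal illustration, not a production scheduler: a hypothetical task graph in which each task lists the earlier tasks whose results it needs, and a simple scheduler that submits a task to a worker pool only once all of its dependencies have completed. The task names (`a` through `d`) and the trivial summing work inside `run_task` are invented for the example; a real computation would do substantial work in each task.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

# Hypothetical task graph: each task names the earlier tasks whose
# results it requires. "c" combines the outputs of "a" and "b";
# "d" summarizes "c".
TASKS = {
    "a": [],
    "b": [],
    "c": ["a", "b"],
    "d": ["c"],
}

def run_task(name, inputs):
    # Stand-in for real computation: leaf tasks produce a constant,
    # interior tasks combine the results of their dependencies.
    base = {"a": 1, "b": 2}.get(name, 0)
    return base + sum(inputs)

def execute(tasks):
    results = {}              # completed task -> result
    pending = dict(tasks)     # tasks not yet submitted
    running = {}              # future -> task name
    with ThreadPoolExecutor(max_workers=4) as pool:
        while pending or running:
            # Submit every task whose dependencies have all completed.
            ready = [t for t, deps in pending.items()
                     if all(d in results for d in deps)]
            for t in ready:
                deps = pending.pop(t)
                fut = pool.submit(run_task, t, [results[d] for d in deps])
                running[fut] = t
            # Block until at least one running task finishes, then
            # record its result so dependent tasks become ready.
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                results[running.pop(fut)] = fut.result()
    return results

print(execute(TASKS))
```

The scheduler keeps all workers as busy as the dependency graph permits: independent tasks (`a` and `b` here) run concurrently, while `c` and `d` wait their turn. Real task-based runtimes add the pieces this sketch omits, notably moving each task's input data to the processor that will run it.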