Although there is no universally agreed-on wording to describe this notion, there is general agreement about what the concept means:
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.
We can consider algorithms to be procedural solutions to problems.
An input to an algorithm specifies an instance of the problem the algorithm solves. It is very important to specify exactly the set of instances the algorithm needs to handle. (As an example, recall the variations in the set of instances for the three greatest common divisor algorithms discussed in the previous section.) If you fail to do this, your algorithm may work correctly for a majority of inputs but crash on some “boundary” value. Remember that a correct algorithm is not one that works most of the time, but one that works correctly for all legitimate inputs.
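To make the point about boundary values concrete, here is a minimal Python sketch of Euclid's greatest common divisor algorithm; the function name gcd and the choice of "pairs of nonnegative integers, not both zero" as the set of legitimate instances are assumptions made for this illustration, not part of the definition above. Stating the instance set explicitly means the boundary instance in which one number is zero is handled by design rather than by accident.

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly replace (m, n) with (n, m mod n) until n is 0.

    Assumed set of legitimate instances: pairs of nonnegative integers
    that are not both zero. The check below makes that specification
    explicit instead of leaving boundary cases to chance.
    """
    if m < 0 or n < 0 or (m == 0 and n == 0):
        raise ValueError("legitimate inputs are nonnegative integers, not both zero")
    while n != 0:
        m, n = n, m % n
    return m


# The boundary instance gcd(m, 0) returns m, as the specification requires.
print(gcd(60, 24))  # 12
print(gcd(12, 0))   # 12
```

If the instance set were specified differently, say as pairs of positive integers only, the same code would need a different guard; the algorithm and its set of legitimate inputs are specified together.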