Data Structures and Algorithms (DSA)

There are basically two aspects of computer programming. One is data organization, i.e., the data and the structures used to represent the data of the problem at hand; this is the subject of the present text. The other involves choosing the appropriate algorithm to solve the problem at hand. Data structure and algorithm design are thus closely involved with each other.

Since an algorithm is a sequence of steps to solve a problem, there may be more than one algorithm to solve the same problem.

The choice of a particular algorithm depends upon the following considerations:

  1. Performance required, i.e., time complexity.
  2. Memory requirement, i.e., space complexity.
  3. Programming requirements.

We can directly measure only time complexity and space complexity; programming requirements differ from language to language.

  1. Space Complexity

Whenever a solution to a problem is written, some memory is required to complete its execution. For any algorithm, memory may be used for the following:

  1. Variables (these include constant values and temporary values)
  2. Program instructions
  3. Execution (stack space for function calls)

Space complexity is the amount of memory used by the algorithm (including the input values to the algorithm) to execute and produce the result.

Sometimes Auxiliary Space is confused with Space Complexity. But Auxiliary Space is only the extra or temporary space used by the algorithm during its execution.

Space Complexity = Auxiliary Space + Input space

Memory Usage while Execution

  • Instruction space: space needed to store the executable version of the program; it is fixed.
  • Data space: space needed to store all constants and variable values; it has the following further components:
    • Space needed by constants and simple variables. This space is fixed.
    • Space needed by fixed-size structured variables, such as arrays and structures.
    • Dynamically allocated space. This space usually varies.

But while calculating the space complexity of an algorithm, we usually consider only the data space and neglect the instruction space and the environment stack.

In general, the total space needed by the program can be divided into two parts.

  1. A fixed part that is independent of the particular problem, and includes the instruction space and the space for constants, simple variables, and fixed-size structured variables.
  2. A variable part that includes structured variables whose size depends on the particular problem being solved, dynamically allocated space, and the recursion stack space.

Calculating the Space Complexity

For calculating the space complexity, we need to know the amount of memory used by variables of different data types. This generally varies across compilers and operating systems, but the method for calculating the space complexity remains the same.



Data types                                              Size
----------------------------------------------------    -------
bool, char, unsigned char, signed char, int8            1 byte
int16, short, unsigned short, wchar_t                   2 bytes
float, int32, int, unsigned int, long, unsigned long    4 bytes
double, int64, long double, long long                   8 bytes

2. Time Complexity

The time complexity of an algorithm is the amount of time it needs to run to completion. In computer programming, the time complexity of a program or piece of code quantifies the amount of time taken by it to run.

Time complexity is defined using notations such as Big O notation, which excludes coefficients and lower-order terms.

When we describe it in this way, i.e., as the input size goes to infinity, the time complexity is said to be described asymptotically.

For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n for any n (bigger than some n0), the asymptotic time complexity is O(n^3).

It is commonly calculated by counting the number of elementary operations executed by the program or algorithm, where each elementary operation takes a fixed amount of time to perform.

In general, the analysis of an algorithm is carried out in two steps:

 1. A priori analysis:

  1. Presumes an assessment, from a temporal point of view, of the operations used and their relative cost.
  2. In a priori analysis, the result is a function which bounds the algorithm's computing time.

2. A posteriori testing involves the following steps:

  1. Establishing a convenient number of sets of input data that cover the behavioral possibilities of the algorithm.
  2. Executing the algorithm for each input set and collecting actual statistics about the algorithm's consumption of time and space while it is executing.
  3. Building the algorithm's profile: the precise amount of time and storage the algorithm consumes.
  4. The objective of a posteriori testing is to determine the algorithm's profile.

Types of Notations for Time Complexity

Now we will discuss and understand the various notations used for time complexity.

  1. Big Oh denotes "fewer than or the same as" <expression> iterations.
  2. Big Omega denotes "more than or the same as" <expression> iterations.
  3. Big Theta denotes "the same as" <expression> iterations.
  4. Little Oh denotes "fewer than" <expression> iterations.
  5. Little Omega denotes "more than" <expression> iterations.

1. Big Oh-notation:

To denote an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n") the set of functions:

O(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0}

The general step-wise procedure for Big-O runtime analysis is as follows:

  1. Figure out what the input is and what n represents.
  2. Express the maximum number of operations the algorithm performs in terms of n.
  3. Eliminate all but the highest-order terms.
  4. Remove all constant factors.

2. Big Omega-notation:

To denote an asymptotic lower bound, we use Ω-notation. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n") the set of functions:

Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0}

3. Big Theta-notation:

To denote an asymptotically tight bound, we use Θ-notation. For a given function g(n), we denote by Θ(g(n)) (pronounced "big-theta of g of n") the set of functions:

Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0}