A New Algorithm: A Divide-and-Conquer Approach to Increase Computational Efficiency

Mrs. G. Malathi RajSuresh

In computer science, the question is often not how to solve a problem, but how to solve a problem well. Take, for instance, the problem of sorting. Many sorting algorithms are well known; the problem is not to find a way to sort data, but to find a way to sort data efficiently. A sorting algorithm is proposed here that applies the compiler optimization techniques of reduction in strength and code jamming, and that produces sorted data in only n/2 outer-loop passes. The method can be considered an instance of the divide-and-conquer technique of problem solving. Many techniques are available for sorting data, and every sorting algorithm falls under one of them. Even a brute-force approach over the input data is possible given fast enough computers; hence the need to sort the data with the available resources in a more efficient manner. This can be done by reducing the time and space complexity, thereby increasing the efficiency of the algorithm.


The ability to analyze a piece of code or an algorithm and understand its efficiency is vital in computer science. One approach to determining an algorithm's order is to start out assuming an order of O(1), that of an algorithm that does nothing and immediately terminates no matter what the input. Then find the section of code that you expect to have the highest order. From there, work out the algorithmic efficiency from the outside in: figure out the efficiency of the outer loop or recursive portion of the code, then find the efficiency of the inner code; the total efficiency is the efficiency of each layer of code multiplied together.
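This layering rule can be illustrated with a small C sketch (the function name is illustrative): a constant-time body inside an O(n) inner loop inside an O(n) outer loop multiplies out to O(n^2).

```c
#include <stddef.h>

/* Counts the basic operations performed by a doubly nested loop.
 * The outer loop runs n times (O(n)); for each outer iteration the
 * inner loop runs n times (O(n)); multiplying the layers together
 * gives O(n) * O(n) = O(n^2) total operations. */
size_t count_nested_ops(size_t n) {
    size_t ops = 0;
    for (size_t i = 0; i < n; i++)       /* outer layer: O(n) */
        for (size_t j = 0; j < n; j++)   /* inner layer: O(n) */
            ops++;                       /* constant-time body: O(1) */
    return ops;
}
```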


An algorithm of order O(1) runs in the same time no matter what the input. For instance, an algorithm that always returns the same value regardless of input can be considered an algorithm of efficiency O(1).

Algorithms based on binary trees are often O(log n). This is because a perfectly balanced binary search tree has log n layers, and searching for any element in a binary search tree requires traversing a single node on each layer. The binary search algorithm is another example of an O(log n) algorithm. In a binary search, one searches through an ordered array and, beginning in the middle of the remaining space to be searched, decides whether to search the top or the bottom half. The array can be divided in half only log n times before a single element is reached, which is the element being searched for, assuming it is in the array.
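A minimal C sketch of binary search over a sorted integer array (the function name is illustrative):

```c
/* Binary search over a sorted int array: each step halves the
 * remaining search space, so at most about log2(n) + 1 comparisons
 * are made. Returns the index of key, or -1 if key is absent. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;    /* middle of remaining space */
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;  /* search the top half */
        else hi = mid - 1;               /* search the bottom half */
    }
    return -1;                           /* key not present */
}
```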

Algorithms of efficiency O(n) require one pass over the entire input. For instance, a linear search algorithm, which searches an array by checking each element in turn, is O(n). Similarly, accessing an arbitrary element in a linked list is O(n) because linked lists do not support random access.
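For contrast with the O(log n) case, linear search can be sketched in C as follows (again with an illustrative function name):

```c
/* Linear search: checks each element in turn, so the worst case
 * touches all n elements, giving O(n) comparisons.
 * Returns the index of key, or -1 if key is absent. */
int linear_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;   /* found on the i-th comparison */
    return -1;                       /* scanned the whole array */
}
```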

Often, good sorting algorithms are roughly O(n log n). An example of an algorithm with this efficiency is merge sort, which breaks an array into two halves, sorts those halves by recursively calling itself on them, and then merges the result back into a single array. Because it splits the array in half each time, the recursion has log n levels, and for each "level" of the array that has been split up (when the array is in two halves, then in quarters, and so forth), it must merge together all of the elements, an operation of order n.
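The split-then-merge structure can be sketched in C as follows, assuming integer data and an auxiliary buffer (the function names are illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Merges the two sorted runs a[lo..mid) and a[mid..hi) through tmp.
 * Each level of the recursion merges all n elements, so merging
 * costs O(n) per level. */
static void merge(int *a, int *tmp, int lo, int mid, int hi) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* stable merge */
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

static void msort_rec(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return;         /* 0 or 1 elements: already sorted */
    int mid = lo + (hi - lo) / 2;
    msort_rec(a, tmp, lo, mid);      /* sort the left half recursively */
    msort_rec(a, tmp, mid, hi);      /* sort the right half recursively */
    merge(a, tmp, lo, mid, hi);      /* combine the two sorted halves */
}

/* Halving gives log n levels, each costing O(n): O(n log n) overall. */
void merge_sort(int *a, int n) {
    int *tmp = malloc((size_t)n * sizeof(int));
    if (!tmp) return;                /* allocation failed; leave input as-is */
    msort_rec(a, tmp, 0, n);
    free(tmp);
}
```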

O(n^2) is a fairly reasonable efficiency, still in the polynomial-time range. The typical examples of this order come from sorting algorithms, such as selection sort.

The most important non-polynomial efficiency is exponential time. Many important problems can only be solved by algorithms with this (or worse) efficiency. One example is factoring large numbers expressed in binary; the only known general approach is trial and error, and a naive approach involves dividing every number less than the number being factored into that number until one divides in evenly. For every increase of a single binary digit, roughly twice as many tests are required.
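The naive trial-and-error approach can be sketched in C as follows (the function name is illustrative); the work grows with the value of n, which doubles with each extra binary digit.

```c
/* Naive trial-division factoring: tries every candidate divisor
 * from 2 up to n - 1. For a k-bit number this is O(2^k) divisions
 * in the worst case, since each extra binary digit doubles the
 * range of candidates. Returns the smallest nontrivial factor of n,
 * or n itself when n is prime. */
unsigned long smallest_factor(unsigned long n) {
    for (unsigned long d = 2; d < n; d++)
        if (n % d == 0) return d;    /* first divisor that divides evenly */
    return n;                        /* no divisor found: n is prime */
}
```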


Sorting algorithms are usually judged by their efficiency. In this case, efficiency refers to the algorithmic efficiency as the size of the input grows large, and is generally based on the number of elements to sort. Most of the algorithms in use have an algorithmic efficiency of either O(n^2) or O(n log n). A few special-case algorithms can sort certain data sets faster than O(n log n); these are not based on comparing the items being sorted but rely on special properties of the keys. It has been shown that no key-comparison algorithm can perform better than O(n log n).

Many algorithms that have the same order do not have the same speed on the same input. First, algorithms must be judged by their average-case, best-case, and worst-case efficiency. Some algorithms, such as quick sort, perform exceptionally well for some inputs but horribly for others; other algorithms, such as merge sort, are unaffected by the order of the input data. Even a modified version of bubble sort can finish in O(n) for the most favorable inputs. A second factor is the "constant term". Because big-O notation abstracts away many of the details of a process, it is quite useful for looking at the big picture, but one thing that gets dropped is the constant in front of the expression: O(c*n) is just O(n). In the real world the constant c varies across different algorithms; a well-implemented quick sort should have a much smaller constant multiplier than heap sort.

A second criterion for judging algorithms is their additional space requirement. Some algorithms never require extra space, whereas some are most easily understood when implemented with extra space. Space requirements may even depend on the data structure used. A third criterion is stability: does the sort preserve the order of keys with equal values?


All of the sorting algorithms above compare the data and swap items. Selection sort is considered one of the traditional algorithms. It uses two loops: an outer loop to keep track of the passes, and an inner loop to identify the smallest item and swap it with the first item, then the second item, and so on. The outer loop executes for n-1 passes, and the inner loop makes up to n-1 comparisons per pass. The time is spent in identifying the smaller elements, with only one item swapped per pass. This results in a time complexity of:

Number of passes x Number of comparisons = (n-1) x (n-1) = n^2 - 2n + 1.

Since the lower-order terms are negligible compared to n^2, we say that it is of the order O(n^2).
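The selection sort described above can be sketched in C as follows (the function name is illustrative):

```c
/* Selection sort as described above: the outer loop makes n-1
 * passes; on each pass the inner loop scans the unsorted portion
 * for the smallest element, which is then swapped into place.
 * (n-1) passes of up to (n-1) comparisons gives O(n^2). */
void selection_sort(int *a, int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        int min_index = pass;                 /* track the smallest item */
        for (int i = pass + 1; i < n; i++)
            if (a[i] < a[min_index]) min_index = i;
        int tmp = a[pass];                    /* only one swap per pass */
        a[pass] = a[min_index];
        a[min_index] = tmp;
    }
}
```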

Similarly, the variation of selection sort called Bingo sort identifies the largest item instead and swaps it into place, first position by position in the same manner. Though Bingo sort works better than selection sort in practice, there is no change in the order of the algorithm: it still has a time complexity of O(n^2).


Divide and conquer means dividing a problem into two or more smaller instances. Each of these smaller instances is solved recursively, and the solutions are combined to produce a solution for the original instance.


In the algorithms above, only one element is identified and placed in its correct slot per pass. The proposed algorithm places two elements in their correct locations in one scan. It does not involve swapping many elements: it maintains minindex to keep track of the minimum element and maxindex to keep track of the maximum element, then places the minimum item in the first position and the maximum item in the last position. The next pass then starts from the 2nd position and ends at position n-1. Thus the number of outer-loop passes needed to sort n items reduces to n/2. The algorithm also uses compiler optimization techniques such as code jamming and reduction in strength. The complexity thus works out to (n/2) * n = n^2/2. This is the MinMax sort algorithm, which is of the order O(n^2/2).

Here, n is the number of items, pass advances to the next round, and i is the inner-loop index used for the comparisons. The index of the maximum value found so far is stored in a variable called maxindex, and the index of the minimum value in a variable called minindex. Count is a variable used to keep track of the outer loop. The algorithm compares from the first item to the last, gets the minimum value, and swaps it into the first position; in the same scan it gets the maximum value and swaps it into the last position. In the next scan it leaves the first position and checks for the next minimum value, and leaves the last position to get the next maximum value; that is why the scan lengths are n, n-2, n-4, n-6, ..., 1.
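The algorithm as described can be sketched in C as follows. This is a reconstruction from the description above, not the author's original listing; the names minindex, maxindex, and pass follow the variables named in the text.

```c
/* MinMax sort, reconstructed from the description: each scan finds
 * both the minimum and the maximum of the remaining range and
 * places them at the two ends, so only n/2 outer passes are needed. */
void minmax_sort(int *a, int n) {
    for (int pass = 0; pass < n / 2; pass++) {
        int lo = pass, hi = n - 1 - pass;   /* range shrinks from both ends */
        int minindex = lo, maxindex = lo;
        for (int i = lo; i <= hi; i++) {    /* one scan finds min and max */
            if (a[i] < a[minindex]) minindex = i;
            if (a[i] > a[maxindex]) maxindex = i;
        }
        /* place the minimum at the front of the range */
        int tmp = a[lo]; a[lo] = a[minindex]; a[minindex] = tmp;
        /* if the maximum was sitting at lo, it has just been moved */
        if (maxindex == lo) maxindex = minindex;
        /* place the maximum at the back of the range */
        tmp = a[hi]; a[hi] = a[maxindex]; a[maxindex] = tmp;
    }
}
```

Note the adjustment after the first swap: when the maximum happens to occupy the front position, the swap that places the minimum also relocates the maximum, so maxindex must be updated before the second swap.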

The execution time is obtained for sorting 1,000 items, 30,000 items, 50,000 items, and finally 100,000 items. The table and chart below suggest that selection sort and Bingo sort work with more or less the same time complexity, while the recursive quick sort consumes much of the CPU time. The proposed MinMax sort algorithm organizes the data more efficiently and effectively. The x-axis denotes the number of items passed and the y-axis denotes the computational time.



[Table and chart: number of inputs (in thousands) versus computational time in microseconds for MinMax sort and the compared sorting algorithms; quick sort recorded very large values for the largest inputs.]
The order of the selection sort algorithm is O(n^2), and that of the Bingo sort algorithm is also O(n^2). The order of the MinMax sort algorithm is O(n^2/2). This algorithm also differs from the other algorithms by reducing the number of swaps: it does not swap until the correct item is found, and it uses only one temporary variable, which is reused for storing the maximum value. Thus, through comparison with the other sorting algorithms, we can say that this algorithm outperforms them, except for algorithms of the order O(n log n).



If further enhanced, this "MinMax" sort algorithm is expected to outperform even the fastest sorting algorithm in common use, quick sort. The MinMax sort algorithm may be optimized further to achieve an order lower than O(n log n). The algorithm was implemented in the C language and tested against many test cases; it completed in n/2 outer-loop passes in all of them. Thus the order of the MinMax sort algorithm is O(n^2/2).

About Authors

Mrs. G. Malathi RajSuresh, MCA, M.Phil., (PhD), working as Sr. Lecturer, Dept. of MCA, Velammal Engineering College