Color and Texture Feature Extraction Using Apache Hadoop Framework

Multimedia data is expanding exponentially. The rapid growth of technology, combined with affordable storage and processing capabilities, has led to an explosion in the availability and applications of multimedia. Most of this data exists in the form of images and videos. Today, large amounts of image data are produced by digital cameras, mobile phones, and other sources. Processing such a large collection of images involves highly complex and repetitive operations on a large database, leading to challenges in optimizing query time and data storage capacity. Many image processing and computer vision algorithms are applicable to large-scale data tasks. It is often desirable to run these algorithms on data sets (e.g., larger than 1 TB) that exceed the computational power of a single computer system. To handle such huge volumes of data, we propose executing time- and space-intensive computer vision algorithms on a distributed computing platform using the Apache Hadoop framework. Hadoop operates on a divide-and-conquer strategy: the task of extracting color and texture features is divided and assigned to multiple nodes of the Hadoop cluster. A significant speedup in computation time and efficient utilization of memory can be achieved by exploiting the parallel nature of the Apache Hadoop framework. A major advantage of Hadoop is its economy, as the whole framework can be deployed on existing commodity machines. Moreover, the system is highly fault tolerant and less vulnerable to node failures.
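
The divide-and-conquer approach described above can be sketched as a Hadoop Streaming mapper. This is only a minimal illustration, not the paper's implementation: the input format (one text line per image, carrying an image ID followed by 8-bit channel values) and all function names are assumptions made here for the sake of a self-contained example.

```python
import sys


def color_histogram_bins(pixels, bins=8):
    """Quantize 8-bit pixel/channel values into a coarse color histogram.

    Returns a dict mapping bin index -> count. Eight bins over the
    0-255 range is an arbitrary choice for illustration.
    """
    hist = {}
    for v in pixels:
        b = min(v * bins // 256, bins - 1)
        hist[b] = hist.get(b, 0) + 1
    return hist


def mapper(lines):
    """Hadoop Streaming mapper: each input line is 'image_id v1 v2 ...'.

    Emits tab-separated (image_id:bin, count) pairs on stdout; Hadoop
    shuffles these by key so reducers can sum partial counts per image.
    """
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed records
        image_id, pixels = parts[0], [int(p) for p in parts[1:]]
        for b, count in sorted(color_histogram_bins(pixels).items()):
            print(f"{image_id}:{b}\t{count}")


if __name__ == "__main__":
    mapper(sys.stdin)
```

Under Hadoop Streaming, such a script would be passed via the `-mapper` option, letting the cluster split the image records across nodes exactly as the divide-and-conquer description suggests; a companion reducer would sum the counts per `image_id:bin` key.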