Video data contain a wealth of useful information that organizations can exploit to gain insights into their business operations. These data are large and growing rapidly because cameras are now installed everywhere, capturing information continuously. Consequently, the data require substantial storage capacity to be preserved and substantial computing power to be processed. Technologies such as Apache Spark, Apache Storm, and Apache Hadoop have been widely used to perform big data processing on computer clusters. This thesis introduces general solutions and algorithms that can be used with such technologies to improve the performance of processing big video data on computer clusters. However, the thesis focuses on Apache Hadoop to provide an empirical evaluation of the proposed algorithms; Hadoop is selected because it is designed to run on commodity hardware. This thesis investigates several approaches to improving the performance of video processing on Hadoop clusters. These approaches use the Hadoop MapReduce programming model to distribute the processing of big video data, together with sampling methods that avoid unnecessary computation while processing the video. Change detection identifies where the video content changes, and the proposed algorithms sample frames based on these detected changes. The proposed algorithms also tolerate faults, supporting video processing on failure-prone systems. In addition, the thesis proposes three novel data placement policies to improve the performance of MapReduce-based video processing.
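To make the sampling idea concrete, the following is a minimal illustrative sketch of change-detection-based frame sampling, assuming a simple frame-differencing detector with a fixed threshold. The helper names (`frame_diff`, `sample_frames`) and the threshold value are hypothetical and do not correspond to the thesis's exact algorithms.

```python
# Hypothetical sketch: sample frames only when the content changes,
# skipping redundant frames to avoid unnecessary computation.

def frame_diff(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def sample_frames(frames, threshold=10.0):
    """Keep the first frame, then only frames that differ enough
    (by frame_diff) from the most recently sampled frame."""
    if not frames:
        return []
    sampled = [0]          # indices of sampled frames
    last = frames[0]       # most recently sampled frame
    for i, f in enumerate(frames[1:], start=1):
        if frame_diff(last, f) > threshold:
            sampled.append(i)
            last = f
    return sampled

# Example: flat 1-D "frames"; a static scene followed by an abrupt change.
static = [[0] * 8] * 5
changed = [[100] * 8] * 3
print(sample_frames(static + changed))  # → [0, 5]
```

In a MapReduce setting, a mapper would apply such a filter to its assigned video split, so that only the sampled frames reach the expensive processing stage.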