The big data software stack based on Apache Spark and Hadoop has become mission-critical in many enterprises. The performance of Spark and Hadoop jobs depends on a large number of configuration settings, and manual tuning is expensive and brittle. Prior efforts to develop on-line and off-line automatic tuning approaches, intended to make the big data stack less dependent on manual tuning, demonstrated only modest performance improvements, and only on very simple, single-user workloads over small data sets. In the more traditional "small data" space, research into autonomic systems produced workload prediction and re-configuration techniques based on the analysis of past workload history. These have only limited applicability in the big data space, where workloads tend to be less repetitive. This thesis presents KERMIT, an autonomic architecture for big data capable of automatically tuning Apache Spark and Hadoop on-line, achieving performance 30% faster than rule-of-thumb tuning by a human administrator and 92% as fast as the best possible tuning established by an exhaustive search of the tuning parameter space. KERMIT detects important workload changes with 99% accuracy and predicts future workload types with 96% accuracy. It can identify and classify complex multi-user workloads without being explicitly trained on examples of those workloads, and it does not rely on past workload history to predict future workload classes and their associated performance. KERMIT can identify and learn new workload classes, and adapt to workload drift, without human intervention. The thesis introduces three new machine learning algorithms: a low-overhead on-line search algorithm, a statistical ensemble algorithm for real-time change detection, and a new zero-shot machine learning algorithm for the identification of unseen hybrid classes.