Timely and cost-effective analytics over "Big Data" are now a key ingredient for success in many business, scientific, engineering, and government endeavors. Tuning is the process of adjusting the settings that govern the memory, cores, and instances used by the system. The job optimization process ensures optimal performance and prevents resource bottlenecks. Each property and setting is adjusted so that resources are used correctly for the specific system setup. Several parameters, together with the choice of scheduling mechanism, affect the performance metrics. We believe these parameters must be measured efficiently for individual tasks and mapped to scheduling policies in order to maximize performance.
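To make the idea concrete, here is a minimal Python sketch of the kind of resource settings a tuning pass adjusts. The property names are standard Hadoop/YARN configuration keys, but the values and the `fits_node` sanity check are illustrative assumptions, not part of any particular tuning framework:

```python
# Illustrative candidate configuration of common Hadoop/YARN resource
# settings that a tuning pass might adjust. Values are example numbers.
candidate = {
    "mapreduce.map.memory.mb": 2048,                # container memory per map task
    "mapreduce.reduce.memory.mb": 4096,             # container memory per reduce task
    "mapreduce.map.cpu.vcores": 1,                  # cores per map container
    "yarn.nodemanager.resource.memory-mb": 16384,   # memory a node offers to YARN
}

def fits_node(conf):
    """Check that a single task container fits within what a node offers.

    A hypothetical sanity check: a real tuner would validate many more
    constraints (vcores, JVM heap vs. container size, instance counts).
    """
    node_mem = conf["yarn.nodemanager.resource.memory-mb"]
    task_mem = max(conf["mapreduce.map.memory.mb"],
                   conf["mapreduce.reduce.memory.mb"])
    return task_mem <= node_mem

print(fits_node(candidate))  # a 4 GB reduce container fits on a 16 GB node
```

A tuner explores many such candidates; even this toy check shows why settings cannot be changed in isolation: raising a task's container memory without regard to node capacity leaves containers that can never be scheduled.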
Jumbune's 'Job Optimization' is a proprietary framework with a built-in cost-based optimization algorithm that assists in developing and tuning applications running on top of enterprise Hadoop clusters. It recommends an optimal configuration based on the application, the available resources, and the cluster. It balances cluster load and application behaviour together, which results in fine-tuning across the application's lifetime. It orchestrates the life-cycle of an application subject to its actual workload, I/O, data size, and behaviour until it finds optimal parameters that can be applied to it. Time-bound optimization lets administrators on strict deadlines run the optimization framework within a fixed time frame. This ensures that optimization is performed during off-peak hours and does not interfere with the normal job execution schedule on the cluster.
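The time-bound, cost-based idea can be sketched as a search loop that evaluates candidate configurations until a time budget is exhausted. This is an illustrative sketch only, not Jumbune's actual algorithm; the `cost` callable stands in for running the job and measuring it, and the parameter grid (`map_mb`, `reducers`) is a hypothetical search space:

```python
import itertools
import time

def time_bounded_search(candidates, cost, deadline_s):
    """Evaluate candidate configurations until a time budget is exhausted.

    Illustrative sketch of time-bound, cost-based tuning: `cost(conf)`
    stands in for executing the job with `conf` and measuring its cost.
    Returns the lowest-cost configuration seen before the deadline.
    """
    start = time.monotonic()
    best_conf, best_cost = None, float("inf")
    for conf in candidates:
        if time.monotonic() - start > deadline_s:
            break  # respect the administrator's fixed time frame
        c = cost(conf)
        if c < best_cost:
            best_conf, best_cost = conf, c
    return best_conf, best_cost

# Hypothetical search space: map-task container memory and reducer count.
grid = [{"map_mb": m, "reducers": r}
        for m, r in itertools.product([1024, 2048, 4096], [4, 8, 16])]

# Stub cost model standing in for an actual timed job run.
def stub_cost(conf):
    return abs(conf["map_mb"] - 2048) + abs(conf["reducers"] - 8)

best, best_cost = time_bounded_search(grid, stub_cost, deadline_s=5.0)
print(best)  # → {'map_mb': 2048, 'reducers': 8}
```

Because the loop checks the deadline before each evaluation, it returns the best configuration found so far rather than running past the allotted window, which is the property that makes off-peak scheduling safe.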