Pipeline-Based Linear Scheduling of Big Data Streams in the Cloud
FRASCATI::Engineering and technology::Electrical engineering, Electronic engineering, Information engineering
FRASCATI::Natural sciences::Computer and information sciences
Nowadays, there is an accelerating need to efficiently and promptly handle large amounts of data that arrive continuously. Streams of big data have led to the emergence of several Distributed Stream Processing Systems (DSPSs) that assign processing tasks to the available resources (dynamically or not) and route streaming data between them. Efficient scheduling of processing tasks can reduce application latencies and eliminate network congestion. However, the built-in scheduling techniques of the available DSPSs are far from optimal. In this work, we extend our previous work, in which we proposed a linear scheme for the task allocation and scheduling problem. Our scheme takes advantage of pipelines to efficiently handle applications that require heavy (all-to-all) communication between tasks assigned to pairs of components. Here, we prove that our scheme is periodic, and we provide a communication refinement algorithm and a mechanism to handle many-to-one assignments efficiently. For concreteness, our work is illustrated using Apache Storm semantics. The performance evaluation shows that our algorithm achieves load balance and constrains the required buffer space. For throughput testing, we compared our work to the default Storm scheduler as well as to R-Storm. Our scheme was found to outperform both strategies, achieving an average improvement of 25%-40% over Storm's default scheduler under different scenarios, mainly as a result of reduced buffering (≈ 45% less memory). Compared to R-Storm, the results indicate an average improvement of 35%-45%.
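To make the pipeline idea concrete: a minimal sketch of a pipeline-style task placement, where each node hosts one task per topology component so that inter-component (all-to-all) traffic is spread evenly and load stays balanced. This is an illustration only, not the paper's actual linear scheme (which additionally covers periodicity, communication refinement, and many-to-one assignments); the names `pipeline_schedule`, `components`, and `nodes` are hypothetical.

```python
def pipeline_schedule(components, nodes):
    """Assign each component's tasks to worker nodes in round-robin
    order, so that node i hosts the i-th task of every component.
    Each node then holds a vertical 'pipeline' slice of the topology,
    balancing load and distributing all-to-all communication.

    components: dict mapping component name -> number of parallel tasks
    nodes: list of worker-node identifiers
    """
    schedule = {node: [] for node in nodes}
    for comp, n_tasks in components.items():
        for t in range(n_tasks):
            # Round-robin placement keeps per-node task counts balanced.
            node = nodes[t % len(nodes)]
            schedule[node].append((comp, t))
    return schedule


# Example: a 3-component topology (Storm-like spout/bolt naming,
# chosen here for illustration) placed on 2 worker nodes.
topo = {"spout": 2, "bolt_a": 4, "bolt_b": 2}
print(pipeline_schedule(topo, ["n1", "n2"]))
```

With this placement, every pair of consecutive components has tasks co-resident on each node, which is the intuition behind pipelining all-to-all exchanges between component pairs.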
Appears in Collections:
Department of Applied Informatics
Files in This Item:
Final version, open access journal
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.