Journal of Network and Computer Applications, 2019
Apache Hadoop framework supports the storing and processing of big data datasets using simple programming models. Energy management has been recognized as one of the major issues in Hadoop, and many types of research have been conducted in this scope ...
Fatemeh Shabestari +2 more
exaly +2 more sources
IEEE Transactions on Cloud Computing, 2020
MapReduce is a crucial framework in the cloud computing architecture, and is implemented by Apache Hadoop and other cloud computing platforms. The resources required for executing jobs in a large data center vary according to the job types.
C. Chen +4 more
semanticscholar +1 more source
International Journal of Cloud Applications and Computing, 2011
As a main subfield of cloud computing applications, internet services require large-scale data computing. Their workloads can be divided into two classes: customer-facing query-processing interactive tasks that serve hundreds of millions of users within a short response time and backend data analysis batch tasks that involve petabytes of data.
Zhiwei Xu, Bo Yan, Yongqiang Zou
openaire +1 more source
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, 2013
From its beginnings as a framework for building web crawlers for small-scale search engines to being one of the most promising technologies for building datacenter-scale distributed computing and storage platforms, Apache Hadoop has come far in the last seven years.
openaire +1 more source
Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2016
Hadoop is a widely used software framework for handling massive data. As heterogeneous computing gains its momentum, variants of Hadoop have been developed to offload the computation of the Hadoop applications onto the heterogeneous processors, such as GPUs, DSPs, and FPGA. Unfortunately, these variants do not support on-demand resource scaling for the ...
Yi-Wei Chen +3 more
openaire +1 more source
Proceedings of the 8th ACM International Conference on Computing Frontiers, 2011
The information-technology platform is being radically transformed with the widespread adoption of the cloud computing model supported by data centers containing large numbers of multicore servers. While cloud computing platforms can potentially enable a rich variety of distributed applications, the need to exploit multiscale parallelism at the inter ...
Riyaz Haque +2 more
openaire +1 more source
Proceedings of the 4th Workshop on Workflows in Support of Large-Scale Science, 2009
MapReduce provides a parallel and scalable programming model for data-intensive business and scientific applications. MapReduce and its de facto open source project, called Hadoop, support parallel processing on large datasets with capabilities including automatic data partitioning and distribution, load balancing, and fault tolerance management ...
Jianwu Wang +2 more
openaire +1 more source
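The MapReduce model summarized in the snippet above (map, shuffle, reduce, with the framework handling data partitioning and grouping) can be sketched in a few lines. This is a minimal single-process illustration of the programming model only, not Hadoop's distributed implementation; the word-count task and all function names are illustrative assumptions.

```python
from collections import defaultdict
from itertools import chain

# Illustrative single-process sketch of the MapReduce programming model.
# In Hadoop, the map and reduce phases run in parallel across a cluster,
# and the shuffle/grouping step is performed by the framework.

def map_phase(record):
    # Map: emit an intermediate (key, value) pair per word in one input line.
    return [(word, 1) for word in record.split()]

def reduce_phase(key, values):
    # Reduce: aggregate all values collected for a single key.
    return key, sum(values)

def mapreduce(records):
    # Shuffle: group intermediate pairs by key, as the framework would.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map_phase(r) for r in records):
        groups[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

counts = mapreduce(["hadoop map reduce", "map reduce map"])
# counts == {"hadoop": 1, "map": 3, "reduce": 2}
```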
Energy-Efficient Task Scheduling for CPU-Intensive Streaming Jobs on Hadoop
IEEE Transactions on Parallel and Distributed Systems, 2019
Hadoop, especially Hadoop 2.0, has been a dominant framework for real-time big data processing. However, Hadoop is not optimized for energy efficiency.
Peiquan Jin +3 more
semanticscholar +1 more source