Results 191 to 200 of about 22,565 (236)
Some of the following articles may not be open access.
HDFS+: Concurrent Writes Improvements for HDFS
2013 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, 2013
HDFS is a popular distributed file system that provides high scalability and throughput. It lacks built-in support for multi-source data generation, which arises naturally in many applications, including log mining and data analysis. A basic HDFS environment requires a data collection step before analysis, because much of the data resides on local disks ...
Kun Lu +2 more
openaire +1 more source
Proceedings of the 23rd international symposium on High-performance parallel and distributed computing, 2014
In this paper, we propose SOR-HDFS, a SEDA (Staged Event-Driven Architecture)-based approach to improve the performance of HDFS Write operation. This design not only incorporates RDMA-based communication over InfiniBand but also maximizes overlapping among different stages of data transfer and I/O.
Nusrat S. Islam +3 more
openaire +1 more source
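The staged, event-driven design the abstract describes can be sketched generically: each stage owns its own queue and worker, so the receive and write stages overlap in time. This is a toy illustration of the SEDA pattern only, not the actual SOR-HDFS implementation; the stage names and functions are invented for the sketch.

```python
# Toy SEDA-style pipeline: one queue and one worker thread per stage,
# so consecutive stages process different items concurrently.
import queue
import threading

def stage(name, fn, inbox, outbox):
    def worker():
        while True:
            item = inbox.get()
            if item is None:            # poison pill: shut down, propagate
                if outbox is not None:
                    outbox.put(None)
                break
            if outbox is not None:
                outbox.put(fn(item))
    t = threading.Thread(target=worker, name=name)
    t.start()
    return t

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
t1 = stage("recv",  lambda b: b.upper(), q1, q2)   # stand-in for data receive
t2 = stage("write", lambda b: b + "!",   q2, q3)   # stand-in for disk I/O

for blk in ["pkt1", "pkt2", "pkt3"]:
    q1.put(blk)
q1.put(None)
t1.join()
t2.join()

out = [x for x in iter(q3.get, None)]   # drain results until the pill
print(out)  # ['PKT1!', 'PKT2!', 'PKT3!']
```

Because each stage keeps its own queue, a slow "write" stage never blocks "recv" from accepting the next item, which is the overlap the paper's design exploits at much larger scale.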
Proceedings of the 3rd IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, 2016
Given recent advances in ubiquitous positioning technologies, it is now common to query terabytes of spatial data. These massive datasets are usually geo-distributed across multiple data centers to ensure availability. Yet at least one replica of the data is stored close to where the data are generated.
Mariam Malak Fahmy +2 more
openaire +1 more source
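The placement policy the abstract mentions, one replica near the data's origin plus remote copies for availability, can be sketched with a toy model. The data-center names and coordinates below are invented for illustration, and the distance is plain Euclidean rather than a real geodesic metric.

```python
# Toy geo-replica placement: rank data centers by distance from the
# data's origin; the nearest holds the "close" replica, the rest
# provide availability.
import math

CENTERS = {
    "us-east":  (40.7, -74.0),   # hypothetical coordinates
    "eu-west":  (48.9, 2.3),
    "ap-south": (19.1, 72.9),
}

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def place_replicas(origin, k=2):
    ranked = sorted(CENTERS, key=lambda c: euclid(CENTERS[c], origin))
    return ranked[:k]

print(place_replicas((41.0, -73.5)))  # ['us-east', 'eu-west']
```

A real system would also weigh load, cost, and network topology, but the nearest-first ranking captures the "one replica close to the source" invariant the abstract states.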
HDF-Net: Capturing Homogeny Difference Features to Localize the Tampered Image
IEEE Transactions on Pattern Analysis and Machine Intelligence
Modern image editing software enables anyone to alter the content of an image to deceive the public, which can pose a security hazard to personal privacy and public safety.
Ruidong Han +5 more
semanticscholar +1 more source
2020 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), 2020
The rapidly increasing growth of big data is driving greater use of distributed computing and clusters, since tasks can be split and processed in parallel for higher computational efficiency. Environments like Apache Hadoop use distributed file systems such as HDFS to support large clusters of typically commodity hardware to ...
Abhishek Das +3 more
openaire +1 more source
2017 International Conference on Inventive Computing and Informatics (ICICI), 2017
Due to the high volume of online activity, a large amount of data is generated. Handling this data requires an efficient system that can process it effectively. One such system is the Hadoop Distributed File System (HDFS). HDFS consists of a number of nodes; one is the master, while all the others are slave nodes. The master node is ...
B Purnachandra Rao +1 more
openaire +1 more source
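The master/slave split the abstract describes, with the master tracking which slave nodes hold each data block, can be sketched as a toy model. The round-robin placement and fixed replication factor here are simplifications for illustration, not the real HDFS placement policy.

```python
# Toy model of the HDFS master/slave split: the master (NameNode)
# records which slave DataNodes hold each block, with replication.
import itertools

class ToyNameNode:
    def __init__(self, datanodes, replication=3):
        self.datanodes = list(datanodes)
        self.replication = min(replication, len(self.datanodes))
        self.block_map = {}                          # block id -> holders
        self._rr = itertools.cycle(self.datanodes)   # round-robin placement

    def allocate_block(self, block_id):
        targets = [next(self._rr) for _ in range(self.replication)]
        self.block_map[block_id] = targets
        return targets

nn = ToyNameNode(["dn1", "dn2", "dn3", "dn4"], replication=2)
print(nn.allocate_block("file1_blk0"))  # ['dn1', 'dn2']
print(nn.allocate_block("file1_blk1"))  # ['dn3', 'dn4']
```

The key point the abstract hints at is that the master holds only metadata (the block map); the slave nodes store the block contents themselves.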
2023
The HDF4 Reference Manual is a detailed technical document that serves as a comprehensive reference for users working with HDF4 (Hierarchical Data Format version 4). It provides in-depth information about the functions, data structures, and constants used in the HDF4 library.
openaire +1 more source
2023
The HDF4 User's Guide is a comprehensive manual designed to help users understand and work with the HDF4 (Hierarchical Data Format version 4). HDF4 is a widely used file format and set of tools developed by the HDF Group for scientific data management.
openaire +1 more source
Proceedings of the 17th International Middleware Conference, 2016
Distributed file systems built for Big Data Analytics and cluster file systems built for traditional applications have very different functionality requirements, resulting in separate storage silos. In enterprises, there is often the need to run analytics on data generated by traditional applications that is stored on cluster file systems.
Ramya Raghavendra +2 more
openaire +1 more source
HDF/HDF-EOS data access, visualization and processing tools at the GES DAAC
IGARSS 2003. 2003 IEEE International Geoscience and Remote Sensing Symposium. Proceedings (IEEE Cat. No.03CH37477), 2004
To help users of remote sensing data, the NASA Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a series of desktop and on-line tools. These are presented in this article. Various HDF readers for AIRS and MODIS data have been written in IDL, C and Fortran.
G. Leptoukh +12 more
openaire +1 more source

