Results 261 to 270 of about 444,586 (306)
Some of the following articles may not be open access.

Modeling distributed file systems

ACM SIGMETRICS Performance Evaluation Review, 1992
This paper describes different methods and techniques used to model, analyze, evaluate and implement distributed file systems. Distributed file systems are characterized by the distributed system hardware and software architecture in which they are implemented, as well as by the file systems' functions.

Distributed File System

2016
The main objective of this chapter is to provide information and guidance for building a Hadoop distributed file system to address the big data classification problem. This system can help one to implement, test, and evaluate various machine-learning techniques presented in this book for learning purposes.

Rectifying corrupted files in distributed file systems

Proceedings of the 11th International Conference on Distributed Computing Systems, 1991
A probabilistic comparison algorithm is presented which requires O(f log n) bits to be transmitted to identify the corrupt pages in a file (where n is the number of pages and f is the maximum number of pages that could be corrupted), which improves on previous results on the growth of communicated bits as functions of both n and of f.
S. Rangarajan, D. Fussell
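The flavour of such an algorithm can be sketched with a recursive hash comparison over page ranges (a simplification of the paper's probabilistic scheme; the function names, page layout, and use of SHA-256 here are illustrative assumptions, not the authors' construction). Descending only into ranges whose combined hash differs means each corrupt page triggers about O(log n) hash exchanges, giving roughly O(f log n) overall:

```python
import hashlib

def range_hash(pages, lo, hi):
    # Combined hash of pages[lo:hi]; in a real protocol only this
    # digest would cross the network, not the pages themselves.
    return hashlib.sha256(b"".join(pages[lo:hi])).hexdigest()

def find_corrupt_pages(local, remote, lo=0, hi=None):
    """Return indices where remote pages differ from local ones.

    Compare the hash of a whole range; if it matches, the range is
    (probabilistically) clean and nothing more is sent. Otherwise split
    in half and recurse, bottoming out at single pages.
    """
    if hi is None:
        hi = len(local)
    if lo >= hi:
        return []
    if range_hash(local, lo, hi) == range_hash(remote, lo, hi):
        return []
    if hi - lo == 1:
        return [lo]
    mid = (lo + hi) // 2
    return (find_corrupt_pages(local, remote, lo, mid)
            + find_corrupt_pages(local, remote, mid, hi))
```

With n pages and f corruptions, only the subtrees containing a corrupt page are expanded, which is where the O(f log n) communication bound in the abstract comes from.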

The Hadoop Distributed File System

2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), 2010
The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can ...
Konstantin Shvachko et al.

File allocation in distributed systems

Proceedings of the 1976 ACM SIGMETRICS conference on Computer performance modeling measurement and evaluation - SIGMETRICS '76, 1976
The problem of allocating files in a computer network is a complex combinatorial problem due to the number of integer design parameters involved. These parameters include system cost, number of copies of each file to be stored, and sites at which the copies should be stored. The tradeoffs between these parameters are discussed.
K. M. Chandy, J. E. Hewes
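The tradeoff the abstract describes can be illustrated with a toy exhaustive search over copy placements (the site names, storage costs, and access costs below are invented for illustration; the paper addresses the general combinatorial problem, which is far too large for brute force at real scale):

```python
from itertools import combinations

# Hypothetical per-site cost of storing one copy of the file.
storage_cost = {"A": 4, "B": 6, "C": 5}

# access_cost[client][site]: cost for a client to read the file
# from a given site; zero when the copy is local.
access_cost = {
    "A": {"A": 0, "B": 3, "C": 7},
    "B": {"A": 3, "B": 0, "C": 2},
    "C": {"A": 7, "B": 2, "C": 0},
}

def total_cost(copy_sites):
    # Storage cost of every copy plus, for each client,
    # the cost of reading from its cheapest copy.
    store = sum(storage_cost[s] for s in copy_sites)
    read = sum(min(access_cost[c][s] for s in copy_sites)
               for c in access_cost)
    return store + read

def best_allocation():
    # Enumerate every nonempty subset of sites and keep the cheapest.
    sites = list(storage_cost)
    best = None
    for k in range(1, len(sites) + 1):
        for subset in combinations(sites, k):
            cost = total_cost(subset)
            if best is None or cost < best[1]:
                best = (subset, cost)
    return best
```

Even in this three-site toy, adding a copy lowers read cost but raises storage cost, which is exactly the integer-parameter tradeoff the abstract points to.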

A distributed hypercube file system

Proceedings of the third conference on Hypercube concurrent computers and applications -, 1988
For the hypercube, an autonomous, physically interconnected file system is proposed. The resulting distributed file system consists of an I/O organization and a software interface. The system is loosely coupled architecturally, but from an operating-systems point of view it forms a tightly coupled system in which interprocessor messages are handled ...
R. J. Flynn, H. Hadimioglu

The ITC distributed file system

ACM SIGOPS Operating Systems Review, 1985
This paper presents the design and rationale of a distributed file system for a network of more than 5000 personal computer workstations. While scale has been the dominant design influence, careful attention has also been paid to the goals of location transparency, user mobility and compatibility with existing operating system interfaces.
M. Satyanarayanan et al.

Toward massive distributed file systems

Proceedings of the Third Workshop on Workstation Operating Systems, 1992
Massive scale in distributed file systems remains an elusive goal. Existing systems are generally limited to only a few hundred clients or make restrictive assumptions, such as that widely shared files are read-only. It is argued that a truly massive system must scale for all kinds of files; file access traces suggest that occasionally written files ...
M. Blaze, R. Alonso

Secure cloud distributed file system

2016 11th International Conference for Internet Technology and Secured Transactions (ICITST), 2016
Traditional methods of securing data are challenged by the specific nature and architecture of the cloud. With the increasing sophistication of cyber attackers and advances in cryptanalysis techniques, encryption alone is not sufficient to ensure data security. A more adaptive and flexible approach to data security is thus required.
Kheng Kok Mar et al.

The pulse distributed file system

Software: Practice and Experience, 1985
A network of powerful personal computers, linked by a high-speed local area network, is increasingly seen as an alternative to a traditional centralized time-sharing operating system. The PULSE project is investigating how such a system may be constructed to give the benefits of a self-sufficient personal computer to each user without ...
G. M. Tomlinson et al.
