Results 251 to 260 of about 1,615,318
Some of the following articles may not be open access.

Energy-Latency Tradeoff for Energy-Aware Offloading in Mobile Edge Computing Networks

IEEE Internet of Things Journal, 2018
Mobile edge computing (MEC) brings computation capacity to the edge of mobile networks, in close proximity to smart mobile devices (SMDs), and saves energy compared with local computing, but at the cost of increased network load and ...
Jiao Zhang, Xiping Hu

Good-case Latency of Byzantine Broadcast: a Complete Categorization

ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, 2021
This paper explores the good-case latency of Byzantine fault-tolerant broadcast, motivated by the real-world latency and performance of practical state machine replication protocols.
Ittai Abraham   +3 more

Broadband Analog Aggregation for Low-Latency Federated Edge Learning

IEEE Transactions on Wireless Communications, 2018
To leverage rich data distributed at the network edge, a new machine-learning paradigm, called edge learning, has emerged where learning algorithms are deployed at the edge for providing intelligent services to mobile users.
Guangxu Zhu, Yong Wang, Kaibin Huang

Collaborative Cloud and Edge Computing for Latency Minimization

IEEE Transactions on Vehicular Technology, 2019
By performing data processing at the network edge, mobile edge computing can effectively overcome the deficiencies of network congestion and long latency in cloud computing systems.
Jinke Ren   +3 more

Latency Minimization for D2D-Enabled Partial Computation Offloading in Mobile Edge Computing

IEEE Transactions on Vehicular Technology, 2020
We consider a Device-to-Device (D2D)-enabled mobile edge computing offloading scenario, where a device can partially offload its computation task to the edge server or exploit the computation resources of proximal devices.
Umber Saleem   +4 more
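As a rough illustration of the partial-offloading idea in the abstract above (a toy model with hypothetical rates, not the paper's formulation, and ignoring transmission delay): if a task of C cycles is split so that the locally computed part and the offloaded part finish at the same time, the completion latency is minimized.

```python
def optimal_split(total_cycles, local_rate, offload_rate):
    """Toy partial-offloading split (hypothetical model, transmission delay ignored).

    Split so local and offloaded parts finish simultaneously:
        (C - x) / f_local = x / f_offload  =>  x = C * f_offload / (f_local + f_offload)
    Returns the offloaded cycle count and the resulting latency in seconds.
    """
    offloaded = total_cycles * offload_rate / (local_rate + offload_rate)
    latency = offloaded / offload_rate
    return offloaded, latency

# Example: a 1-Gcycle task, local CPU at 1 Gcycle/s, edge server at 3 Gcycle/s.
off, t = optimal_split(1e9, 1e9, 3e9)
print(off, t)  # 750 Mcycles offloaded, 0.25 s latency
```

With the faster edge server taking three quarters of the work, both parts finish in 0.25 s, versus 1 s for purely local execution; a realistic model would add the D2D or uplink transmission time to the offloaded branch.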

Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve

USENIX Symposium on Operating Systems Design and Implementation
Each LLM serving request goes through two phases: prefill, which processes the entire input prompt and produces the first output token, and decode, which generates the rest of the output tokens one at a time.
Amey Agrawal   +7 more
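The two-phase structure described in the abstract above can be sketched in Python (dummy stand-ins for the model and KV cache; this is not the Sarathi-Serve implementation):

```python
def prefill(prompt_tokens):
    # Prefill: process the whole prompt in one pass, building the attention
    # state (KV cache stand-in) and producing the first output token.
    kv_cache = list(prompt_tokens)            # stand-in for key/value state
    first_token = len(prompt_tokens) % 50     # dummy "model" output
    return kv_cache, first_token

def decode(kv_cache, last_token, max_new_tokens):
    # Decode: generate the remaining tokens one at a time, each step
    # appending to the KV cache built during prefill.
    out = []
    token = last_token
    for _ in range(max_new_tokens):
        kv_cache.append(token)
        token = (token + 1) % 50              # dummy next-token rule
        out.append(token)
    return out

kv, first = prefill([3, 7, 11])
rest = decode(kv, first, 4)
print([first] + rest)  # [3, 4, 5, 6, 7]
```

The asymmetry this sketch exposes is the one the paper targets: prefill is one large compute-bound pass, while decode is many small memory-bound steps, so batching the two naively trades throughput against per-token latency.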

Microbial Latency

Clinical Infectious Diseases, 1984
The means by which pathogens suppress, subvert, or elude host defenses and establish latent infections include microbially induced immunosuppression or antigenic variation, gaining access to sites of the body that are inaccessible to the immune system, and manipulation of the immune response to the advantage of the pathogen. Various risk factors of the

Delay is Not an Option: Low Latency Routing in Space

ACM Workshop on Hot Topics in Networks, 2018
SpaceX has filed plans with the US Federal Communications Commission (FCC) to build a constellation of 4,425 low Earth orbit communication satellites. It will use phased-array antennas for uplinks and downlinks and laser communication between satellites to ...
M. Handley
