GitHub - cloudera/llama: Llama - Low Latency Application MAster

Llama ${project.version}

Llama is a Yarn Application Master that mediates the management and monitoring
of cluster resources between Impala and Yarn.
  
Llama provides a Thrift API for Impala to request and release allocations 
outside of Yarn-managed container processes.
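
The reserve/release cycle that Impala drives over this Thrift API is outlined
in the sketch below. It is illustrative only: the client interface, struct, and
method names (LlamaAmClient, ResourceAsk, reserve, release) are hypothetical
stand-ins, not the generated Thrift bindings; see the Llama documentation for
the actual interface.

    import java.util.List;
    import java.util.UUID;

    /** Hypothetical view of the reserve/release cycle a client drives against Llama. */
    interface LlamaAmClient {
      /** Ask Llama to reserve capacity (vcores/memory) on specific nodes via Yarn. */
      UUID reserve(String queue, List<ResourceAsk> asks) throws Exception;

      /** Release a previously granted reservation back to Yarn. */
      void release(UUID reservationId) throws Exception;
    }

    /** A single per-node resource ask (illustrative, not the real Thrift struct). */
    class ResourceAsk {
      final String node;
      final int vcores;
      final int memoryMb;

      ResourceAsk(String node, int vcores, int memoryMb) {
        this.node = node;
        this.vcores = vcores;
        this.memoryMb = memoryMb;
      }
    }

    class ReserveReleaseExample {
      static void runQuery(LlamaAmClient llama) throws Exception {
        // 1. Reserve capacity for the query before its fragments start.
        UUID reservation = llama.reserve(
            "root.impala",
            List.of(new ResourceAsk("node-1.example.com", 2, 4096),
                    new ResourceAsk("node-2.example.com", 2, 4096)));
        try {
          // 2. Run the query using the granted capacity; execution itself
          //    happens outside Yarn-managed container processes.
        } finally {
          // 3. Hand the resources back so Yarn can schedule other workloads.
          llama.release(reservation);
        }
      }
    }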

For details on how to build Llama, refer to the BUILDING.txt file.

For details on how to use Llama, please refer to the Llama documentation.