Cache Server

Documentation

All the code is complete; you can see it at these links:

  1. https://github.com/justgoodman/hipster-cache: this is the CacheServer (I created my own implementation of a hash table)
  2. https://github.com/justgoodman/hipster-cache-proxy: this is the ProxyServer. The ProxyServer provides sharding using a virtual-nodes scheme and can work in two modes: 1) As a proxy: your client sends any command to the ProxyServer, which finds the needed CacheServer node, sends the command to that node, and returns the response to your client. 2) As a mediator: your client sends a key to the ProxyServer, which finds the needed CacheServer node and returns that node's address to the client. After that your client can send commands directly to the specified CacheServer.
  3. https://github.com/justgoodman/hipster-cache-client: this is the Client. It uses the ProxyServer as a mediator: it sends a key to the proxy server, the proxy server returns the CacheServer node's address, and the client then sends commands to that CacheServer node (see the sketch after this list).
  4. You can run the tests for my application using this code: https://github.com/justgoodman/hipster-cache-client/tree/master/test
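
As referenced in point 3, here is a minimal Go sketch of the mediator flow. The real client lives in hipster-cache-client; the newline-terminated framing and the quoted-address reply format are assumptions based on the GET_SHARD example later in this document.

package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// askShard asks the proxy (mediator mode) which CacheServer owns the key.
func askShard(proxyAddr, key string) (string, error) {
	conn, err := net.Dial("tcp", proxyAddr)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	fmt.Fprintf(conn, "GET_SHARD %s\n", key)
	// The proxy answers with the shard address, e.g. "130.211.69.21:4011".
	addr, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		return "", err
	}
	return strings.Trim(strings.TrimSpace(addr), `"`), nil
}

func main() {
	shard, err := askShard("130.211.82.2:4001", "mykey")
	if err != nil {
		panic(err)
	}
	// Now the client can talk to that CacheServer node directly.
	conn, err := net.Dial("tcp", shard)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprintf(conn, "SET mykey \"Hello\"\n")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Println(reply) // expect: OK
}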


You can see my Kubernetes configs in this folder:
https://github.com/justgoodman/hipster-cache/tree/master/kubernetes

You can also see how to run the application locally using docker-compose: https://github.com/justgoodman/hipster-cache/tree/master/dockerJuno

Scheme of work

Proxy for an anonymous client

Mediator for a Golang client

How to run locally

  1. Copy the folder
    https://github.com/justgoodman/hipster-cache/tree/master/dockerJuno to your special docker folder
  2. In this folder do "git clone" for all applications:
    2.1 git clone git@github.com:justgoodman/hipster-cache.git
    2.2 git clone git@github.com:justgoodman/hipster-cache-proxy.git
    2.3 git clone git@github.com:justgoodman/hipster-cache-client.git
  3. If you haven't installed glide yet, install it: "go get github.com/Masterminds/glide"
  4. Install all dependencies:
    4.1 cd hipster-cache && glide install && cd ../
    4.2 cd hipster-cache-proxy && glide install && cd ../
    4.3 cd hipster-cache-client && glide install && cd ../
  5. After that you can start the whole needed environment using:
    docker-compose up -d

To run the tests, you need to get the docker container id for hipster_cache_client:
docker ps | grep client
Then, using this id, run the command:
docker exec -it 38d87ddbb0bf bash -c "cd /go/src/hipster-cache-client && go test ./test/..."
In my example 38d87ddbb0bf was the docker id for hipster_cache_client.

It really works!

How to check it locally

  1. Add to your hosts file the IP of your docker machine for "juno.net"; I set mine from "docker-machine ip"
  2. To work directly with one of the 3 HipsterCacheServers you can run:
    telnet juno.net 4022
    telnet juno.net 4032
    telnet juno.net 4001
  3. To send a command to the ProxyServer you can run:
    telnet juno.net 4001
    On the proxy server you can use the command:
    GET_SHARD key
    This command returns the shard address for the specified key
    Locally you can observe:
    Consul: http://juno.net:8500/ui
    Prometheus: http://juno.net:9090
    Grafana: http://juno.net:3001/login

Consul

All services are registered in Consul, and we also have health checks for the CacheServers.
You can see the working Consul service at this link:
http://104.155.104.83:8500/ui/#/dc1/services

Prometheus

All services send metrics to Prometheus. You can see the working Prometheus service at this link:
http://104.199.49.154:9090/targets

Grafana

You can see the hipster-cache dashboard at this link:
http://104.155.54.131:3000/dashboard/db/hipster-cache
Login: admin
Pass: admin

Kubernetes

All services are deployed in Kubernetes and work.

Example of usage:

Service                IP               Application port   Metrics port
Hipster-Cache-Proxy    130.211.82.2     4001               4002
Hipster-Cache-Server1  104.155.106.91   4011               4012
Hipster-Cache-Server2  104.155.86.180   4011               4012
Hipster-Cache-Server3  130.211.69.21    4011               4012

Services:


Pods:


Configs:

You can see the Kubernetes config on this page:
https://github.com/justgoodman/hipster-cache/tree/master/kubernetes

API

How to connect

You can connect to the ProxyServer: it sends your command to the needed CacheServer and returns that CacheServer's response:
telnet 130.211.82.2 4001

You also have the opportunity to connect directly to the needed CacheServer.
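
If you prefer not to use telnet, any TCP client works. Here is a minimal Go sketch of proxy-mode usage; the newline-terminated, one-line-reply framing is an assumption on my part:

package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// In proxy mode every command goes to the ProxyServer,
	// which forwards it to the right CacheServer shard for us.
	conn, err := net.Dial("tcp", "130.211.82.2:4001")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	reader := bufio.NewReader(conn)
	for _, cmd := range []string{`SET mykey "Hello"`, `GET mykey`} {
		fmt.Fprintf(conn, "%s\n", cmd) // send one command per line
		reply, err := reader.ReadString('\n')
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s", cmd, reply) // reply already ends with '\n'
	}
}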

Proxy

These commands can be run only on the proxy server:
GET_SHARD key

Get the shard address for the key.

Examples:
hipster_cache>GET_SHARD hello
"130.211.69.21:4011"

GET_SHARDING

Returns information about all shard nodes (address, list of virtual nodes).
Examples:
hipster_cache>GET_SHARDING
"address:10.112.1.37,nodes:[52 95 65 6 28 19 38 47 12 9 87 33 73 26 4 22 43 69 63 51 61 56 1 15 2 82 39 70 20 11 42 5 94]"
"address:10.112.2.34,nodes:[75 37 7 14 85 53 36 17 18 76 99 57 72 31 10 46 93 89 40 91 80 29 88 79 74 98 35 16 3 97 66 92 81 32]"
"address:10.112.1.38,nodes:[62 83 55 64 49 0 25 86 23 96 50 58 48 45 44 8 21 30 90 67 34 84 27 77 68 59 24 54 78 13 41 71 60]"

Common

PING

Responds with "pong". Used for health checks.

Examples:

hipster_cache>ping
"pong"

EXIT

Close connection.

Strings

GET key

Get the value of key. If the key does not exist, the special value nil is returned. An error is returned if the value stored at key is not a string, because GET only handles string values.

Examples:
hipster_cache>GET nonexisting
(nil)
hipster_cache>SET mykey "Hello"
OK
hipster_cache>GET mykey
"Hello"

SET key value [EX seconds] [PX milliseconds]

Set key to hold the string value.
Options:
EX seconds -- Set the specified expire time, in seconds.
PX milliseconds -- Set the specified expire time, in milliseconds.

hipster_cache>SET mykey "Hello"
OK
hipster_cache>SET mykey "Hello" EX 10
OK

Lists

LPUSH key value

Inserts the specified value at the head of the list stored at key. If key does not exist, it is created as an empty list before performing the push operation.
hipster_cache>LPUSH junoList Money
OK

LRANGE key start stop

Returns the specified elements of the list stored at key. The offsets start and stop are zero-based indexes, with 0 being the first element of the list (the head of the list), 1 being the next element and so on.

hipster_cache>LPUSH mylist "one"
OK
hipster_cache>LPUSH mylist "two"
OK
hipster_cache>LRANGE mylist 0 1
"one"
"two"

LSET key index value

Sets the list element at index to value

hipster_cache>LPUSH mylist "one"
OK
hipster_cache>LPUSH mylist "two"
OK
hipster_cache>LSET mylist 0 "three"
OK
hipster_cache>LRANGE mylist 0 1
"one"
"three"

LLEN key

Returns the length of the list stored at key

hipster_cache>LPUSH mylist "one"
OK
hipster_cache>LPUSH mylist "two"
OK
hipster_cache>LLEN mylist
"2"

Dictionary

DSET key field value

Sets field in the dictionary at key to value

hipster_cache>DSET item label Juno
OK
hipster_cache>DGET item label
"Juno"

DGET key field

Returns the value associated with field in the dictionary at key.

hipster_cache>DSET item label Juno
OK
hipster_cache>DGET item label
"Juno"

DGETALL key

Returns all fields and values of the dictionary at key. In the returned value, every field name is followed by its value, so the length of the reply is twice the size of the dictionary

hipster_cache>DSET item label Juno
OK
hipster_cache>DSET item color Yellow
OK
hipster_cache>DGETALL item
"label"
"Juno"
"color"
"Yellow"

Theory

Hash tables

For the hash table implementation I used the chaining scheme: we allocate a slice of size m (the cardinality of the hash function), and each element of this slice is a chain whose elements link to the next (a short sketch follows the evaluation below).

Evaluation:

  • Memory consumption is O(n+m), where n is the number of objects currently stored in the hash table and m is the cardinality of the hash function
  • Operations work in time O(c+1), where c is the length of the longest chain
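
A minimal sketch of this chaining scheme in Go (the type and function names are illustrative, and the toy hash stands in for the universal-family function discussed below; the real implementation is in hipster-cache):

package main

import "fmt"

// entry is one link of a chain: it stores a key/value pair
// and points to the next entry that hashed to the same slot.
type entry struct {
	key   string
	value string
	next  *entry
}

// hashTable allocates a slice of size m; each element is the head of a chain.
type hashTable struct {
	m     int
	slots []*entry
	hash  func(string) int
}

func newHashTable(m int) *hashTable {
	return &hashTable{
		m:     m,
		slots: make([]*entry, m),
		// Toy hash for illustration; the real table should use a random
		// function from a universal family (see below).
		hash: func(s string) int {
			h := 0
			for _, c := range s {
				h = (h*31 + int(c)) % m
			}
			return h
		},
	}
}

// set walks the chain in O(c+1): update in place if the key exists,
// otherwise prepend a new entry to the chain.
func (t *hashTable) set(key, value string) {
	i := t.hash(key)
	for e := t.slots[i]; e != nil; e = e.next {
		if e.key == key {
			e.value = value
			return
		}
	}
	t.slots[i] = &entry{key: key, value: value, next: t.slots[i]}
}

func (t *hashTable) get(key string) (string, bool) {
	for e := t.slots[t.hash(key)]; e != nil; e = e.next {
		if e.key == key {
			return e.value, true
		}
	}
	return "", false
}

func main() {
	t := newHashTable(16)
	t.set("mykey", "Hello")
	fmt.Println(t.get("mykey")) // Hello true
}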

Main problem: how to make both m and c small?

Solve "c" problem

To keep c small, we can use as our hash function a random function from a universal family.

Lemma

If h (the hash function) is chosen randomly from a universal family, the average length of the longest chain c is O(1 + alpha), where alpha = n/m is the load factor of the hash table.

Corollary

If h is from a universal family, operations with the hash table run on average in time O(1 + alpha). Alpha is kept constant (by the resizing described below), so on average an operation runs in constant time.

Solve "m" problem

To keep m small, we start with a small m and iteratively increase (double) m whenever the load factor exceeds 0.9 (a sketch of this resizing step follows).
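
A hedged sketch of that resizing step; the 0.9 threshold comes from the text above, while the names and the toy hash function are illustrative:

package main

import "fmt"

// table keeps n (stored objects) and m (slots) so it can track the load factor.
type table struct {
	n     int
	m     int
	slots [][]string // each slot holds a chain of keys (values omitted for brevity)
	hash  func(key string, m int) int
}

func toyHash(key string, m int) int {
	h := 0
	for _, c := range key {
		h = (h*31 + int(c)) % m
	}
	return h
}

// add inserts a key and doubles m when the load factor n/m would exceed 0.9.
func (t *table) add(key string) {
	if float64(t.n+1)/float64(t.m) > 0.9 {
		t.rehash(2 * t.m)
	}
	i := t.hash(key, t.m)
	t.slots[i] = append(t.slots[i], key)
	t.n++
}

// rehash reallocates the slice with the new cardinality and re-inserts
// every key, because each key's slot depends on m.
func (t *table) rehash(newM int) {
	newSlots := make([][]string, newM)
	for _, chain := range t.slots {
		for _, key := range chain {
			i := t.hash(key, newM)
			newSlots[i] = append(newSlots[i], key)
		}
	}
	t.m, t.slots = newM, newSlots
}

func main() {
	t := &table{m: 2, slots: make([][]string, 2), hash: toyHash}
	for _, k := range []string{"a", "b", "c", "d"} {
		t.add(k)
	}
	fmt.Println(t.m) // m was doubled (twice) as the load factor crossed 0.9
}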

Universal family hash functions

With the universal family of hash functions for strings we have a problem: based on this lemma, to get a small c we need to take a big p (the cardinality of the hash function). If we take a big p, we will consume too much memory.

What can we do?

We can apply a universal family for integers on top of the result of the string hash function.

Algorithm:

  1. Apply a random hash function from the polynomial family to the string. We get some integer number modulo p
  2. Apply a random hash function from the universal family for integers less than p. We get a number between 0 and m-1

In the resulting formula h(x) = ((a*x + b) mod p) mod m, we can set a and b randomly; p is a big prime number and must be greater than x.
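
Here is a sketch of both steps in Go; the concrete prime, the table cardinality, and the way the parameters are drawn are illustrative assumptions:

package main

import (
	"fmt"
	"math/rand"
)

const (
	p = 1000000007 // a big prime; the integers we hash in step 2 are all below p
	m = 1024       // cardinality of the table
)

// polyHash is step 1: a random function from the polynomial family.
// It runs in O(|s|) and returns an integer modulo p.
func polyHash(s string, x int64) int64 {
	var h int64
	// Horner's rule: h = s[0] + s[1]*x + s[2]*x^2 + ... (mod p)
	for i := len(s) - 1; i >= 0; i-- {
		h = (h*x + int64(s[i])) % p
	}
	return h
}

// intHash is step 2: a random function from the universal family for
// integers below p, h(z) = ((a*z + b) mod p) mod m.
func intHash(z, a, b int64) int64 {
	return (a*z + b) % p % m
}

func main() {
	// Choose the random parameters once, when the table is created.
	x := 1 + rand.Int63n(p-1) // x in [1, p-1]
	a := 1 + rand.Int63n(p-1) // a in [1, p-1]
	b := rand.Int63n(p)       // b in [0, p-1]

	slot := intHash(polyHash("mykey", x), a, b)
	fmt.Println(slot) // a number between 0 and m-1
}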

For this algorithm we have this lemma: for any two different strings s1 and s2 of length at most L+1, the probability of collision Pr[h(s1) = h(s2)] is at most 1/m + L/p.

So this is not a universal family, because for a universal family there should not be any summand L/p: the probability of collision should be at most 1/m. But we can get very close to a universal family, because we can control p. We can make p very big, then L/p will be very small, and the probability of collision will be at most roughly 1/m.

For big enough p we will have c = O(1 + alpha), where c is the length of the longest chain and alpha is the load factor.

Computing PolyHash(s) runs in time O(|s|).

If the lengths of the keys are bounded by a constant L, computing h(s) takes O(L) = O(1) time.
