- python3.x
- pip
- venv
- Base/Ping -
http://127.0.0.1:5000/ or http://127.0.0.1:5000/ping
- Factorial -
http://127.0.0.1:5000/factorial?n=3
- Fibonacci -
http://127.0.0.1:5000/fibonacci?n=4
- Ackermann -
http://127.0.0.1:5000/ackermann?m=1&n=1
- Response body is a JSON string, e.g. `{"algo":"factorial","result":1,"status":"success"}`
- Response content-type is `application/json`
- Response code is 200 for success or 400 for failure
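The response contract above can be sketched in plain Python (a minimal illustration of the assumed shape, not the project's actual handler code; the failure body shape is an assumption, since only the success example is documented here):

```python
import json

def make_response(algo, result=None, error=None):
    # Success: 200 with {"algo", "result", "status": "success"}.
    # Failure: 400 with {"algo", "error", "status": "failure"} -- the
    # failure shape is assumed; only the success shape is documented.
    if error is None:
        body = {"algo": algo, "result": result, "status": "success"}
        return json.dumps(body, sort_keys=True, separators=(",", ":")), 200
    body = {"algo": algo, "error": error, "status": "failure"}
    return json.dumps(body, sort_keys=True, separators=(",", ":")), 400

body, status = make_response("factorial", result=1)
# body == '{"algo":"factorial","result":1,"status":"success"}', status == 200
```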
- The project is built on Flask 2.0.
- Project root - contains `requirements.txt`, `README.md`, `src`, `Dockerfile`, `docker-compose.yaml`, `venv` (virtual environment)
- src - source code of the project.
- Packages - api, app, tasks, tests
- Files - wsgi.py, settings_common.py, settings_testing.py
- wsgi.py - entry point for the Flask application
- settings_common.py & settings_testing.py - settings modules (common and test overrides)
- app - package for creating the flask app instance
- api - package for creating blueprints and api endpoints
- tasks - package for math functions
- tests - package for unittests. Contains more packages within it.
- test_algos - package & class for unittests of tasks
- test_apis - package & class for unittests of api endpoints
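The math functions in the `tasks` package might look like the following sketch (the actual implementations in `src/tasks` may differ in style and validation; the Fibonacci indexing convention with F(0)=0, F(1)=1 is an assumption):

```python
def factorial(n: int) -> int:
    """Iterative factorial; rejects negative input."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def fibonacci(n: int) -> int:
    """n-th Fibonacci number with F(0)=0, F(1)=1 (indexing is an assumption)."""
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def ackermann(m: int, n: int) -> int:
    """Classic two-argument Ackermann function; grows explosively,
    so only tiny m values are practical for an HTTP endpoint."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

With these definitions, `/factorial?n=3` would yield 6 and `/ackermann?m=1&n=1` would yield 3.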
This requires Python 3.x installed on the machine
- Create virtual env & activate
python3 -m venv venv
source venv/bin/activate
- Install requirements
pip install -r requirements.txt
- Run the webserver
PYTHONPATH=src SIMPLE_SETTINGS=settings_common gunicorn --workers 3 --bind 127.0.0.1:5000 wsgi:app  # with Gunicorn (WSGI server)
PYTHONPATH=src SIMPLE_SETTINGS=settings_common,settings_testing flask run  # without Gunicorn
- Run the Testcases
PYTHONPATH=src SIMPLE_SETTINGS=settings_common,settings_testing flask test
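`SIMPLE_SETTINGS=settings_common,settings_testing` tells the simple-settings package which settings modules to load, with later modules overriding earlier ones. That merge behavior can be illustrated with a simplified stdlib-only sketch (not the library's actual implementation; the demo module names are made up):

```python
import importlib
import sys
import types

def load_settings(env_value):
    # Merge the UPPERCASE attributes of each comma-separated settings
    # module; later modules override earlier ones. This mimics, in
    # simplified form, how simple-settings uses SIMPLE_SETTINGS.
    merged = {}
    for name in env_value.split(","):
        module = importlib.import_module(name.strip())
        for key in dir(module):
            if key.isupper():
                merged[key] = getattr(module, key)
    return merged

# Demo with in-memory stand-ins for settings_common / settings_testing:
common = types.ModuleType("demo_settings_common")
common.LOG_LEVEL = "DEBUG"
common.TESTING = False
testing = types.ModuleType("demo_settings_testing")
testing.TESTING = True
sys.modules["demo_settings_common"] = common
sys.modules["demo_settings_testing"] = testing

config = load_settings("demo_settings_common,demo_settings_testing")
# config == {"LOG_LEVEL": "DEBUG", "TESTING": True}
```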
- Build & run (the container entry point is the Gunicorn command)
docker-compose up --build
- Append the `-d` switch to run the container in detached mode
- Run the test cases
docker-compose run -e SIMPLE_SETTINGS=settings_common,settings_testing web flask test
- Stop & clean up containers
docker-compose down
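The `docker-compose.yaml` driving the commands above might look roughly like this (the service name `web` matches the `docker-compose run ... web` command; the rest is an assumed sketch, not the repository's actual file):

```yaml
version: "3.8"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - PYTHONPATH=src
      - SIMPLE_SETTINGS=settings_common
    # Entry point mirrors the native Gunicorn command, bound to all
    # interfaces so the published port is reachable from the host.
    command: gunicorn --workers 3 --bind 0.0.0.0:5000 wsgi:app
```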
- A decorator `@timeit` is placed on each math function - it captures the runtime of the math function and writes it to the logger
- The logger uses stdout but can be changed to a log file
- `LOG_LEVEL` is set to `DEBUG` explicitly in settings_common.py for displaying the runtime; this is not recommended in production
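A minimal sketch of such a decorator (the project's actual `@timeit` may differ in logger configuration):

```python
import functools
import logging
import time

# Stdout handler at DEBUG level, mirroring the settings_common.py note above.
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def timeit(func):
    """Log the wall-clock runtime of the wrapped function at DEBUG level."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            logger.debug("%s took %.6f seconds",
                         func.__name__, time.perf_counter() - start)
    return wrapper

@timeit
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Calling `factorial(5)` returns 120 and emits a DEBUG log line with the elapsed time.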
- VM Machine (AWS EC2 and Elastic IP)
- Open EC2 console on web
- Create a new EC2 with free AMIs like aws-linux
- Attach VPC, Attach volume, Setup ssh key pair, Assign security groups
- Allow ports 80 and 443 in the security groups
- SSH to server using ssh key pair
- SCP the code to the server as ec2-user and follow the native setup of the project
- Install Nginx
- Update nginx.conf to set the upstream to port 5000
- Alternatively, run Gunicorn bound to a Unix socket and point the Nginx upstream at that socket
- VM Machine (AWS AutoScaling groups & EC2)
- Create a VM as above
- Create an AMI of the VM
- Create a launch template for Autoscaling
- Choose the AMI that we created
- Select an instance type such as m1.large or t2.large and set the security group
- Create auto-scaling groups using Wizard
- Keep the group size at 2 for high availability (the default is 1)
- Setup ELB and point the traffic to auto-scaling group
- Serverless (AWS Lambda)
- Use the Python `zappa` package to deploy on AWS Lambda
- Activate the virtual env and then install: `pip install zappa`
- Create an AWS user & group in IAM and assign a Lambda execution role. For a quick setup use `AWSLambdaFullAccess`, but FullAccess is not recommended.
- Use the `access_key_id` & `secret_access_key` of the user in `zappa_settings.json`
- Run `zappa deploy dev`. It returns the web service URL.
- For changes in the app, run `zappa update dev`
- Stop & remove the app with `zappa undeploy dev`
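For reference, a minimal `zappa_settings.json` could look like this sketch (project name, bucket, runtime, and region are placeholder assumptions; credentials can also come from the standard AWS config/credentials files rather than the settings file itself):

```json
{
    "dev": {
        "app_function": "wsgi.app",
        "project_name": "flask-math-api",
        "runtime": "python3.8",
        "s3_bucket": "zappa-flask-math-api",
        "aws_region": "us-east-1"
    }
}
```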
- Containers (AWS ECS)
- Run & test the app locally using docker setup
- Install AWS Cli
- Build the container using `docker build -t image_name:version` for pushing to ECR
- Assuming the ECR (private container registry) setup is already done, tag the image using `docker tag` and push using `docker push`
- Assuming the ECS cluster setup is done using either AWS EC2 or AWS Fargate
- Create Task definition specifying task role, task memory and cpu units
- Create a cluster service specifying the task definition, number of tasks, and deployment type (rolling/blue-green)
- Create an application load balancer and attach the service and port number
- Containers (AWS EKS)
- Assuming EKS is already setup
- Create deployment YAML file mentioning name, metadata, replicas, selectors etc.
- Run `kubectl apply -f app-deployment.yaml`
- Expose the app using a Kubernetes Service (`kubectl expose`)
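An `app-deployment.yaml` along these lines would fit the steps above (image name, labels, and replica count are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: math-api
  labels:
    app: math-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: math-api
  template:
    metadata:
      labels:
        app: math-api
    spec:
      containers:
        - name: web
          image: image_name:version   # the image pushed to ECR earlier
          ports:
            - containerPort: 5000
```

It could then be exposed with, for example, `kubectl expose deployment math-api --type=LoadBalancer --port=80 --target-port=5000`.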
- CloudWatch monitoring can be enabled
- An APM such as New Relic can also be used for app performance monitoring
- Sentry can be added for unhandled exceptions
- `ASGI` can be implemented for high throughput
- Environment variables can be moved to `.env` files
- HTTP response codes used are only 200 & 400, for quick implementation