Objective:
- Create GET, PUT and POST APIs using the Python-based Flask framework
- Invoke the APIs from a Python-based REST client
- Set up an AWS load balancer in front of the server
- Launch a swarm of 1,000+ clients that invoke the REST API from a Python client, using the AWS EC2 Auto Scaling feature
- Both the server and client code will be uploaded to an S3 bucket, from where they are downloaded at run time whenever new clients or servers are launched and run. This saves us the arduous task of updating all 1,000+ clients/servers whenever the codebase changes or we scale from 1,000 to 5,000 clients.
High-level steps on the server side
- Set up an AWS Network Load Balancer
- Create a target group attached to the load balancer
- Create a new public certificate using AWS Certificate Manager.
- Create an EC2 instance
- Install all required software.
sudo apt install python3-pip
pip3 install flask-restful
- Generate a certificate and private key
openssl req -x509 -newkey rsa:4096 -nodes -out cert.pem -keyout key.pem -days 365
Make sure you specify the common name (CN) as the load balancer DNS name, or the Route 53 record that maps to that name.
- Launch the server application. (Code for the server app.py is provided at the end of this section)
sudo python3 app.py
and test it from another standalone client to verify that it works. Two ways of testing from a client are listed below.
- A curl-based method, which can be used for a quick check, and
- A Python-client-based method, which will later be used when we autoscale the AWS EC2 instances. (Code for the client appclient.py is provided at the end of this section.)
echo quit | openssl s_client -showcerts -servername ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com -connect ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com:443 > cacert.pem
curl --cacert cacert.pem https://ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com/device/FME2445
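The same check can be done end-to-end from Python. The sketch below is a rough, illustrative equivalent of the two commands above and is not part of the original client: it saves the load balancer's certificate with the standard ssl module and then issues the GET with requests. The DNS name is the same placeholder used above.
# quick_check.py - illustrative stand-in for the openssl + curl test above
import ssl
import requests

HOST = "ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com"  # your load balancer DNS name

# Save the server certificate in PEM form, much like the openssl command writes cacert.pem
with open("cacert.pem", "w") as f:
    f.write(ssl.get_server_certificate((HOST, 443)))

# Verify against the freshly saved certificate and call the GET endpoint
r = requests.get("https://{}/device/FME2445".format(HOST), verify="cacert.pem")
print(r.status_code, r.json())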
- Create a new S3 bucket, say s3://server-code-template, and upload app.zip (containing app.py, cert.pem and key.pem) into it.
- Create a new cronfile.txt with the following line and register it with crontab cronfile.txt
@reboot . /home/ubuntu/startupscript.sh
startupscript.sh contains the following code
#!/bin/bash -x
DATE=$(date +'%F %H:%M:%S')
DIR=/home/ubuntu
cd $DIR
echo "Inside startupscript at $DATE" > $DIR/scriptoutput.txt
sudo rm -rf artifactsfroms3 >> $DIR/scriptoutput.txt
mkdir artifactsfroms3 >> $DIR/scriptoutput.txt
/usr/local/bin/aws s3 cp --recursive s3://server-code-template/ /home/ubuntu/artifactsfroms3/ --profile default --debug >> $DIR/scriptoutput.txt
# aliases are not expanded in non-interactive scripts, so change directory directly
cd $DIR/artifactsfroms3
pwd >> $DIR/scriptoutput.txt
unzip -o app.zip >> $DIR/scriptoutput.txt
chmod +x app.py >> $DIR/scriptoutput.txt
# blocks here while the Flask server is running
sudo python3 app.py >> $DIR/scriptoutput.txt
echo "file executed" >> $DIR/scriptoutput.txt
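The script above pulls the artifacts with the AWS CLI. If you prefer to keep everything in Python, a boto3 download along these lines does the same job; this is an illustrative sketch, not part of the original script, and it assumes the instance role or default profile has read access to the bucket.
# fetch_artifacts.py - illustrative boto3 equivalent of the "aws s3 cp --recursive" line above
import os
import boto3

BUCKET = "server-code-template"
DEST = "/home/ubuntu/artifactsfroms3"

s3 = boto3.client("s3")
os.makedirs(DEST, exist_ok=True)

# Copy every object in the bucket into the local artifacts directory
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        target = os.path.join(DEST, obj["Key"])
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(BUCKET, obj["Key"], target)
        print("downloaded", obj["Key"])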
- Shut down the instance.
- Create an AMI from the instance.
- Create a Launch Template from the AMI.
- Create an Auto Scaling group from the Launch Template with 0 instances.
- Scale up the Auto Scaling group to 2 instances.
- Add the 2 new instances to the target group and ensure that they pass health checks (a boto3 sketch for this step follows this list).
- Post data to the load balancer from a standalone client
sudo python3 appclient.py
- SSH (e.g. with PuTTY) into the servers to view the server logs.
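Adding the instances to the target group and waiting for them to pass health checks can also be scripted. The sketch below is a minimal boto3 version; the target group ARN and instance IDs are placeholders to substitute, not values produced earlier in this guide.
# register_and_check_targets.py - illustrative helper; ARN and instance IDs are placeholders
import time
import boto3

elbv2 = boto3.client("elbv2")

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:<region>:<account>:targetgroup/<name>/<id>"
INSTANCE_IDS = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# Register the freshly launched server instances with the target group
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id} for instance_id in INSTANCE_IDS],
)

# Poll until every registered target reports "healthy"
while True:
    health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
    states = [t["TargetHealth"]["State"] for t in health["TargetHealthDescriptions"]]
    print(states)
    if states and all(state == "healthy" for state in states):
        break
    time.sleep(15)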
Code for app.py
from flask import Flask
from flask_restful import Api, Resource, reqparse

app = Flask(__name__)
api = Api(app)

# In-memory device store (each server instance keeps its own copy)
devices = [
    {
        "deviceSerialNumber": "ELK9045",
        "type": 42,
        "location": "New York"
    },
    {
        "deviceSerialNumber": "FME2445",
        "type": 42,
        "location": "London"
    },
    {
        "deviceSerialNumber": "JCB2489",
        "type": 42,
        "location": "Tokyo"
    }
]

class Devices(Resource):
    def get(self, deviceSerialNumber):
        for device in devices:
            if deviceSerialNumber == device["deviceSerialNumber"]:
                return device, 200
        return "Device not found", 404

    def post(self, deviceSerialNumber):
        parser = reqparse.RequestParser()
        parser.add_argument("type")
        parser.add_argument("location")
        args = parser.parse_args()
        for device in devices:
            if deviceSerialNumber == device["deviceSerialNumber"]:
                return "Device with deviceSerialNumber {} already exists".format(deviceSerialNumber), 400
        device = {
            "deviceSerialNumber": deviceSerialNumber,
            "type": args["type"],
            "location": args["location"]
        }
        devices.append(device)
        return device, 201

    def put(self, deviceSerialNumber):
        parser = reqparse.RequestParser()
        parser.add_argument("type")
        parser.add_argument("location")
        args = parser.parse_args()
        for device in devices:
            if deviceSerialNumber == device["deviceSerialNumber"]:
                device["type"] = args["type"]
                device["location"] = args["location"]
                return device, 200
        device = {
            "deviceSerialNumber": deviceSerialNumber,
            "type": args["type"],
            "location": args["location"]
        }
        devices.append(device)
        return device, 201

    def delete(self, deviceSerialNumber):
        global devices
        devices = [device for device in devices if device["deviceSerialNumber"] != deviceSerialNumber]
        return "{} is deleted.".format(deviceSerialNumber), 200

# The serial number must be part of the route so it is passed to the handler methods
api.add_resource(Devices, "/device/<string:deviceSerialNumber>")

# Start the TLS server only when the file is run directly, so the module can be imported elsewhere
if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=443, ssl_context=('cert.pem', 'key.pem'))
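Before bringing TLS and the load balancer into the picture, the routes can be sanity-checked locally with Flask's built-in test client. The snippet below is a small, optional sketch; it assumes the listing above is saved as app.py in the same directory and that app.run(...) sits behind the __main__ guard shown there, so importing the module does not start the server.
# test_app_local.py - optional local smoke test using Flask's test client
from app import app

client = app.test_client()

r = client.get("/device/FME2445")
print(r.status_code, r.get_json())   # 200 and the London device

r = client.post("/device/ABJX9357", json={"type": 20, "location": "Delhi"})
print(r.status_code, r.get_json())   # 201 and the newly created device

r = client.put("/device/ABJX9357", json={"type": 21, "location": "Delhi"})
print(r.status_code, r.get_json())   # 200 and the updated device

r = client.delete("/device/ABJX9357")
print(r.status_code, r.get_json())   # 200 and a deletion message

print(client.get("/device/NOSUCH").status_code)  # 404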
Code for appclient.py
import json
import requests

def consumeGETRequestSync():
    url = "https://ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com/device/JCB2489"
    certServer = 'cacert.pem'   # CA bundle extracted from the load balancer (see above)
    headers = {'content-type': 'application/json'}
    # The server does not request a client certificate, so only server verification is needed
    r = requests.get(url, verify=certServer, headers=headers)
    print(r.status_code)
    print(r.json())

def consumePOSTRequestSync():
    url = "https://ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com/device/ABJX9357"
    deviceParam = {"deviceSerialNumber": "ABJX9357", "type": 20, "location": "Delhi"}
    certServer = 'cacert.pem'
    headers = {'content-type': 'application/json'}
    r = requests.post(url, data=json.dumps(deviceParam), verify=certServer, headers=headers)
    print(r.status_code)
    print(r.json())

# call - the autoscaled clients POST data to the server; comment out whichever call you do not need
consumeGETRequestSync()
consumePOSTRequestSync()
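The PUT and DELETE operations exposed by the server follow the same pattern on the client side. The helpers below are illustrative additions to appclient.py (they reuse its imports and the same DNS placeholder) and are not part of the original listing.
# Illustrative additions to appclient.py - PUT and DELETE follow the same pattern
def consumePUTRequestSync():
    url = "https://ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com/device/ABJX9357"
    deviceParam = {"type": 21, "location": "Delhi"}
    certServer = 'cacert.pem'
    headers = {'content-type': 'application/json'}
    # PUT updates an existing device (200) or creates it if it does not exist (201)
    r = requests.put(url, data=json.dumps(deviceParam), verify=certServer, headers=headers)
    print(r.status_code, r.json())

def consumeDELETERequestSync():
    url = "https://ALB-xxxxxxxxxxx.elb.eu-east-x.amazonaws.com/device/ABJX9357"
    certServer = 'cacert.pem'
    # DELETE removes the device from the in-memory list on whichever server handles the request
    r = requests.delete(url, verify=certServer)
    print(r.status_code, r.json())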
High-level steps on the client side
- Create a new S3 bucket, say s3://client-code-template, and upload appclient.zip (containing appclient.py and cacert.pem) into it.
- Create an EC2 instance
- Create a new cronfile.txt with the following line and register it with crontab cronfile.txt
@reboot . /home/ubuntu/startupscript.sh
startupscript.sh contains the following code
#!/bin/bash -x
DATE=$(date +'%F %H:%M:%S')
DIR=/home/ubuntu
cd $DIR
echo "Inside startupscript at $DATE" > $DIR/scriptoutput.txt
sudo rm -rf artifactsfroms3 >> $DIR/scriptoutput.txt
mkdir artifactsfroms3 >> $DIR/scriptoutput.txt
/usr/local/bin/aws s3 cp --recursive s3://client-code-template/ /home/ubuntu/artifactsfroms3/ --profile default --debug >> $DIR/scriptoutput.txt
# aliases are not expanded in non-interactive scripts, so change directory directly
cd $DIR/artifactsfroms3
pwd >> $DIR/scriptoutput.txt
unzip -o appclient.zip >> $DIR/scriptoutput.txt
chmod +x appclient.py >> $DIR/scriptoutput.txt
sudo python3 appclient.py >> $DIR/scriptoutput.txt
echo "file executed" >> $DIR/scriptoutput.txt
- Fire sudo reboot at the command prompt to verify that the files are downloaded from S3 and the startup script runs, fetching and installing the latest executable (appclient.py) on the client; the client script then invokes the POST method to post data to the server.
- Shut down the instance.
- Create an AMI from the instance.
- Create a Launch Template from the AMI.
- Create an Auto Scaling group from the Launch Template with 0 instances.
- Scale up the Auto Scaling group to 1 instance (this can also be scripted; see the boto3 sketch after this list).
- Once the instance has finished initializing, SSH (e.g. with PuTTY) into it and view the client logs to ensure data is sent to the server.
- You are now ready to scale up to 1,000+ instances.
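Once the console steps are proven out, scaling the group up and down by hand gets tedious; a short boto3 call can do it from a script. In the sketch below the Auto Scaling group name is a placeholder, and the desired capacity must lie within the group's configured min/max size.
# scale_clients.py - illustrative helper; the Auto Scaling group name is a placeholder
import boto3

autoscaling = boto3.client("autoscaling")

def scale_clients(desired, group_name="client-swarm-asg"):
    # EC2 Auto Scaling launches or terminates instances from the Launch Template
    # until the group matches the desired capacity.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

scale_clients(1)       # the single-instance smoke test described above
# scale_clients(1000)  # the full swarm, once the smoke test looks good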
Detailed steps
- Creating a load balancer
- Creating an AMI from the server EC2 instance
- Create a Launch Template
- Create an Auto Scaling group
- Scale up the clients
Next, navigate to the EC2 Instances dashboard and you will find your new instance spinning up. Watch the initializing indicator and click refresh until it goes away. This might take some time, since the client startup script runs from the reboot cron job baked into the instance AMI.
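If you would rather not keep clicking refresh, a small boto3 poll along these lines reports when the instances in the group have finished initializing. It is an illustrative sketch: the Auto Scaling group name is a placeholder, and it assumes default AWS credentials on the machine running it.
# wait_for_instances.py - illustrative helper; the group name is a placeholder
import time
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

GROUP_NAME = "client-swarm-asg"  # replace with your Auto Scaling group name

while True:
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[GROUP_NAME])["AutoScalingGroups"][0]
    instance_ids = [i["InstanceId"] for i in group["Instances"]]
    if instance_ids:
        # Fine for the small smoke-test sizes used here; paginate for the full swarm
        statuses = ec2.describe_instance_status(InstanceIds=instance_ids)["InstanceStatuses"]
        ready = [s for s in statuses
                 if s["InstanceStatus"]["Status"] == "ok" and s["SystemStatus"]["Status"] == "ok"]
        print("{} of {} instances initialized".format(len(ready), len(instance_ids)))
        if len(ready) == len(instance_ids):
            break
    time.sleep(30)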
