Ever thought of using a GPT model to run Kubernetes?

Generating Kubernetes commands with the help of OpenAI’s GPT-3 model.

Tirth Patel
6 min read · Jun 23, 2021

Let’s start by understanding what GPT is.

Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3’s full version has a capacity of 175 billion machine learning parameters. GPT-3, introduced in May 2020 and in beta testing as of July 2020, is part of a trend in natural language processing (NLP) systems towards pre-trained language representations. Before the release of GPT-3, the largest language model was Microsoft’s Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters, less than a tenth of GPT-3’s.

The quality of the text generated by GPT-3 is so high that it is difficult to distinguish from that written by a human, which has both benefits and risks. Thirty-one OpenAI researchers and engineers presented the original May 28, 2020 paper introducing GPT-3. In their paper, they warned of GPT-3’s potential dangers and called for research to mitigate risk.

Tech circles have been lit up over the last few weeks following the unveiling by OpenAI of GPT-3, a new type of pretrained language model capable of generating natural language text and computer code with the most minimal of inputs. Could it be a game changer for legal technology, especially in relation to NLP tools that analyse text, as well as doc generation systems?

Artificial Lawyer was initially quite sceptical about GPT-3 (an unsupervised Transformer language model). How was being able to create auto-generated pages of chimeric text that appear to be written by a real person going to be useful, especially in the legal world? Wasn’t this just a gimmick?

So, in this article we are going to generate Kubernetes commands using OpenAI’s GPT-3 model. In this project, we will use Minikube as a single-node Kubernetes cluster and RHEL8 as a virtual machine. Our CGI-based website will be hosted on the RHEL8 VM with the httpd web server.

So, first of all we need to set up the RHEL8 VM as a client of Minikube. For this, we have to copy the admin.conf file from the Minikube VM, replace the server hostname with the IP address, and put the file inside the RHEL8 VM. For a detailed explanation of the client setup, follow the link below 👇

https://github.com/tiru-patel/GPT3-Kubernetes/blob/main/minikube-rhel8-conf.docx
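As a rough sketch of the hostname-to-IP substitution described above (the hostnames, IP address, and port here are hypothetical placeholders, not taken from the linked guide), the server entry in the copied admin.conf can be rewritten with a few lines of Python:

```python
import re

def point_kubeconfig_at_ip(conf_text, ip, port=8443):
    """Replace the server host in a kubeconfig with an explicit IP address."""
    # Matches e.g. "server: https://control-plane.minikube.internal:8443"
    return re.sub(r"(server:\s*https://)[^:\s]+(:\d+)?",
                  rf"\g<1>{ip}:{port}", conf_text)

conf = "    server: https://control-plane.minikube.internal:8443"
print(point_kubeconfig_at_ip(conf, "192.168.99.100"))
# →     server: https://192.168.99.100:8443
```

The same edit can of course be done by hand in any text editor; the point is only that the server field must name an address the RHEL8 VM can actually reach.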

Let’s look at the code for predicting k8s commands.

Trust me, it’s quite easy. We create a GPT class to which we can add the examples used to prime the model. We set the GPT engine, the temperature, and the max tokens. Once the engine is configured, we make a call to the API to get the output.

Note: As this model could be misused, OpenAI provides only limited access, and we need to register with OpenAI to get the API keys.
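A small aside: rather than hardcoding the key in the script (as the CGI file below does for simplicity), it can be read from an environment variable. This is a minimal sketch, and the OPENAI_API_KEY variable name is our own choice here, not something the article’s setup defines:

```python
import os

def load_api_key(env="OPENAI_API_KEY"):
    """Return the OpenAI key from the environment, or raise if it is missing."""
    key = os.environ.get(env)
    if not key:
        raise RuntimeError(f"Set {env} before running the CGI script.")
    return key
```

This keeps the secret out of the source file, which matters if the CGI script ends up in a public repository.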

gpt.py

"""Creates the Example and GPT classes for a user to interface with the OpenAI API."""

import openai


def set_openai_key(key):
    """Sets OpenAI key."""
    openai.api_key = key


class Example():
    """Stores an input, output pair and formats it to prime the model."""

    def __init__(self, inp, out):
        self.input = inp
        self.output = out

    def get_input(self):
        """Returns the input of the example."""
        return self.input

    def get_output(self):
        """Returns the intended output of the example."""
        return self.output

    def format(self):
        """Formats the input, output pair."""
        return f"input: {self.input}\noutput: {self.output}\n"


class GPT:
    """The main class for a user to interface with the OpenAI API.
    A user can add examples and set parameters of the API request."""

    def __init__(self, engine='davinci',
                 temperature=0.5,
                 max_tokens=100):
        self.examples = []
        self.engine = engine
        self.temperature = temperature
        self.max_tokens = max_tokens

    def add_example(self, ex):
        """Adds an example to the object. Example must be an instance
        of the Example class."""
        assert isinstance(ex, Example), "Please create an Example object."
        self.examples.append(ex.format())

    def get_prime_text(self):
        """Formats all examples to prime the model."""
        return '\n'.join(self.examples) + '\n'

    def get_engine(self):
        """Returns the engine specified for the API."""
        return self.engine

    def get_temperature(self):
        """Returns the temperature specified for the API."""
        return self.temperature

    def get_max_tokens(self):
        """Returns the max tokens specified for the API."""
        return self.max_tokens

    def craft_query(self, prompt):
        """Creates the query for the API request."""
        return self.get_prime_text() + "input: " + prompt + "\n"

    def submit_request(self, prompt):
        """Calls the OpenAI API with the specified parameters."""
        response = openai.Completion.create(engine=self.get_engine(),
                                            prompt=self.craft_query(prompt),
                                            max_tokens=self.get_max_tokens(),
                                            temperature=self.get_temperature(),
                                            top_p=1,
                                            n=1,
                                            stream=False,
                                            stop="\ninput:")
        return response

    def get_top_reply(self, prompt):
        """Obtains the best result as returned by the API."""
        response = self.submit_request(prompt)
        return response['choices'][0]['text']
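To see what the primed prompt actually looks like without calling the API, here is a condensed, offline version of the formatting logic from gpt.py above (Example.format plus GPT.craft_query, with the API-related parts stripped out):

```python
class Example:
    """Stores an input/output pair, as in gpt.py."""
    def __init__(self, inp, out):
        self.input, self.output = inp, out

    def format(self):
        return f"input: {self.input}\noutput: {self.output}\n"

def craft_query(examples, prompt):
    """Mirror GPT.craft_query: priming examples followed by the new input."""
    prime = '\n'.join(ex.format() for ex in examples) + '\n'
    return prime + "input: " + prompt + "\n"

examples = [Example('List all the pods', 'kubectl get pods')]
print(craft_query(examples, 'Get the list of services'))
```

So the model simply sees a few-shot pattern of input/output pairs ending with an unanswered input, and the stop sequence "\ninput:" cuts the completion off before it starts inventing a new example.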

Now, for a prediction, we need to send an API call to OpenAI’s GPT model. Before that, we need to add the examples. There is a Jupyter notebook for testing 👇

https://github.com/tiru-patel/GPT3-Kubernetes/blob/main/GPT3%20for%20Kubernetes.ipynb

gptpredict.py

#!/usr/bin/python3
print("content-type: text/html")
print()

import cgi
import subprocess
import json
import openai
from gpt import GPT
from gpt import Example

# OpenAI API key
openai.api_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

# Set the GPT engine
gpt = GPT(engine="davinci",
          temperature=0.5,
          max_tokens=100)

# Add examples to prime the model
gpt.add_example(Example('Launch a myweb deployment with httpd image.',
                        'kubectl create deployment myweb --image=httpd'))
gpt.add_example(Example('Run a test deployment with vimal13/apache-webserver-php as image',
                        'kubectl create deployment test --image=vimal13/apache-webserver-php'))
gpt.add_example(Example('Run a webapptest deployment with vimal13/apache-webserver-php as image',
                        'kubectl create deployment webapptest --image=vimal13/apache-webserver-php'))
gpt.add_example(Example('Run a webapptesting deployment with httpd as image',
                        'kubectl create deployment webapptesting --image=httpd'))
gpt.add_example(Example('Launch a deployment with name as webapp and image as httpd',
                        'kubectl create deployment webapp --image=httpd'))
gpt.add_example(Example('Create a pod with name as testing and image as httpd',
                        'kubectl run testing --image=httpd'))
gpt.add_example(Example('Launch a pod with webpod as name and vimal13/apache-webserver-php as image',
                        'kubectl run webpod --image=vimal13/apache-webserver-php'))
gpt.add_example(Example('Launch a pod with webtest as name and httpd as image',
                        'kubectl run webtest --image=httpd'))
gpt.add_example(Example('Delete deployment with name test',
                        'kubectl delete deployment test'))
gpt.add_example(Example('Delete deployment with name webapp',
                        'kubectl delete deployment webapp'))
gpt.add_example(Example('Delete a pod with name webtest',
                        'kubectl delete pod webtest'))
gpt.add_example(Example('Expose the deployment test as NodePort type and on port 80',
                        'kubectl expose deployment test --port=80 --type=NodePort'))
gpt.add_example(Example('Expose the deployment webtest as External LoadBalancer type and on port 80',
                        'kubectl expose deployment webtest --port=80 --type=LoadBalancer'))
gpt.add_example(Example('Expose the deployment webapp as ClusterIP type and on port 80',
                        'kubectl expose deployment webapp --port=80 --type=ClusterIP'))
gpt.add_example(Example('Create 5 replicas of test deployment',
                        'kubectl scale deployment test --replicas=5'))
gpt.add_example(Example('Create 3 replicas of webapp deployment',
                        'kubectl scale deployment webapp --replicas=3'))
gpt.add_example(Example('Delete all resources of Kubernetes',
                        'kubectl delete all --all'))
gpt.add_example(Example('Get the list of deployments',
                        'kubectl get deployments'))
gpt.add_example(Example('Get the list of services',
                        'kubectl get svc'))
gpt.add_example(Example('List all the pods',
                        'kubectl get pods'))

f = cgi.FieldStorage()
prompt = f.getvalue('x')

# Getting the prediction
output = gpt.submit_request(prompt)
res = output.choices[0].text
cmd = res.split("output")[1].split(":")[1].strip()
cmd = cmd + " --kubeconfig /root/kubews/admin.conf"
print(cmd)
print()
output = subprocess.getoutput('sudo ' + cmd)
print(output)
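The res.split(...) parsing above assumes the completion comes back in the form "output: &lt;command&gt;". A small offline check of that logic, using a sample completion string rather than a real API response:

```python
res = "output: kubectl create deployment myweb --image=httpd"

# The script's parsing: take everything after "output", then after the first ":"
cmd = res.split("output")[1].split(":")[1].strip()
print(cmd)  # → kubectl create deployment myweb --image=httpd

# A slightly more robust variant that would also survive a colon inside
# the command itself (e.g. an image tag like httpd:2.4):
cmd2 = res.split("output:", 1)[1].strip()
assert cmd2 == cmd
```

Note that the original split(":")[1] would truncate a command containing a colon, so the one-argument split("output:", 1) variant is the safer choice if image tags are ever used.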

This is the main CGI file, which will be called using AJAX from the index.html page.

Output of the WebApp:

It’s just awesome 🤩. I hope you have enjoyed and understood the article. Below is my GitHub link for your reference:

https://github.com/tiru-patel/GPT3-Kubernetes

Thanks for reading 😃
