- Initialization: Setting up our agent’s “brain”
- Code Generation: Teaching our agent to write Python scripts
- Library Management: Enabling our agent to install necessary tools
- Code Execution: Empowering our agent to run the code it generates
- Command Center: Creating a central hub to manage all these capabilities
Now, let’s break down each of these steps and see how they come together to form our AI assistant.
Step 1: Initialization — Giving Our Agent Its First Spark of Life
Every great journey begins with a single step, and in the world of AI agents, that step is initialization. This is where we set up the basic structure of our agent and connect it to its primary source of intelligence: in this case, the OpenAI API.
from openai import OpenAI
import os
from google.colab import userdata
import base64
import requests
from PIL import Image
from io import BytesIO
import subprocess
import tempfile
import re
import importlib
import sys

os.environ["OPENAI_API_KEY"] = userdata.get('OPENAI_API_KEY')

class AgentPro:
    def __init__(self):
        # Future initialization code can go here
        pass
This snippet is the digital equivalent of giving life to our AI assistant. We’re importing the necessary libraries, setting up our OpenAI API key, and creating the skeleton of our AgentPro class. It’s like providing a body for our AI: not very useful on its own, but essential for everything that follows.
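If you’re not working in Colab (where userdata is available), a minimal alternative sketch is to read the key from a regular environment variable instead; this is an assumption about your setup, not part of AgentPro itself:

import os
from openai import OpenAI

# Assumes the key is already exported in your shell, e.g. `export OPENAI_API_KEY=sk-...`
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])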
Step 2: Code Generation — Teaching Our Agent to Write Python
Now that our agent has a “body,” let’s give it the ability to think, or in this case, to generate code. This is where things start to get exciting!
def generate_code(self, prompt):
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a Python code generator. Respond only with executable Python code, no explanations or comments except for required pip installations at the top."},
            {"role": "user", "content": f"Generate Python code to {prompt}. If you need to use any external libraries, include a comment at the top of the code listing the required pip installations."}
        ],
        max_tokens=4000,
        temperature=0.7,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    # Strip markdown code fences from the model's reply
    code = re.sub(r'^```python\n|^```\n|```$', '', response.choices[0].message.content, flags=re.MULTILINE)
    # Drop any leading lines that aren't imports or comments
    code_lines = code.split('\n')
    while code_lines and not (code_lines[0].startswith('import') or code_lines[0].startswith('from') or code_lines[0].startswith('#')):
        code_lines.pop(0)
    return '\n'.join(code_lines)
This method is the crown jewel of our agent’s capabilities. It uses the OpenAI API to generate Python code based on a given prompt.
Think of it as giving our agent the ability to brainstorm and write code on the fly. We’re also doing a little cleanup to make sure we get clean, executable Python code without any markdown formatting or unnecessary comments.
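To see what that cleanup does, here is a tiny, hypothetical example of a raw model reply being stripped of its markdown fence (the sample string is invented for illustration):

import re

raw = "```python\nimport math\nprint(math.pi)\n```"
cleaned = re.sub(r'^```python\n|^```\n|```$', '', raw, flags=re.MULTILINE)
print(cleaned)
# import math
# print(math.pi)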
The parameters we’re using (like temperature and top_p) allow us to control the creativity and randomness of the generated code. It’s like adjusting the “inspiration” knob on our AI’s imagination!
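If you wanted more repeatable output for a given prompt, one hypothetical tweak is to lower the temperature; the values and prompt below are illustrative, not the ones AgentPro uses:

from openai import OpenAI

client = OpenAI()
# A low temperature trades creativity for repeatability, which is often
# preferable when you want near-deterministic code generation.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Generate Python code to print today's date."}],
    max_tokens=300,
    temperature=0.2,
)
print(response.choices[0].message.content)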
Step 3: Library Management — Equipping Our Agent with the Right Tools
Every good coder knows the importance of having the right libraries at their disposal. Our AI assistant is no different. This next method allows AgentPro to identify and install any necessary Python libraries:
def install_libraries(self, code):
    # Look for "# pip install <package>" comments in the generated code
    libraries = re.findall(r'#\s*pip install\s+([\w\-]+)', code)
    if libraries:
        print("Installing required libraries...")
        for lib in libraries:
            try:
                importlib.import_module(lib.replace('-', '_'))
                print(f"{lib} is already installed.")
            except ImportError:
                print(f"Installing {lib}...")
                subprocess.check_call([sys.executable, "-m", "pip", "install", lib])
        print("Libraries installed successfully.")
This method is like sending our agent on a shopping spree through the Python Package Index. It scans the generated code for any pip install comments, checks whether the libraries are already installed, and installs them if not. It ensures our agent always has the right tools for the job, no matter what task we throw at it.
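To make the extraction step concrete, here is a small, hypothetical example of the regex pulling a library name out of a generated snippet (the snippet itself is made up for illustration):

import re

generated = "# pip install python-pptx\nfrom pptx import Presentation"
print(re.findall(r'#\s*pip install\s+([\w\-]+)', generated))  # ['python-pptx']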
Step 4: Code Execution — Bringing the Code to Life
Generating code is great, but executing it is where the rubber meets the road. This next method allows our agent to run the code it has generated:
def execute_code(self, code):
    # Write the generated code to a temporary .py file
    with tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False) as temp_file:
        temp_file.write(code)
        temp_file_path = temp_file.name
    try:
        result = subprocess.run(['python', temp_file_path], capture_output=True, text=True, timeout=30)
        output = result.stdout
        error = result.stderr
    except subprocess.TimeoutExpired:
        output = ""
        error = "Execution timed out after 30 seconds."
    finally:
        os.unlink(temp_file_path)
    return output, error
This method is where the magic really happens. It takes the generated code, writes it to a temporary file, executes it, captures the output (or any errors), and then cleans up after itself. It’s like giving our agent hands to type out the code and run it, all in the blink of an eye.
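You can also exercise this method on its own; a minimal sketch, assuming the AgentPro class above is already defined (the one-line script is just an example):

agent = AgentPro()
output, error = agent.execute_code('print("Hello from AgentPro")')
print(output)  # Hello from AgentPro
print(error)   # empty string when the script runs cleanly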
Step 5: Command Center — Putting It All Together
Finally, we need a way to orchestrate all these amazing capabilities. Enter the run method:
def run(self, prompt):
    print(f"Generating code for: {prompt}")
    code = self.generate_code(prompt)
    print("Generated code:")
    print(code)
    print("\nExecuting code...")
    output, error = self.execute_code(code)
    if output:
        print("Output:")
        print(output)
    if error:
        print("Error:")
        print(error)
This is the command center of our AI assistant. It takes a prompt, generates the code, executes it, and reports back with the results or any errors. It’s like having a personal assistant who not only understands your requests but carries them out and gives you a full report.
Putting It All Together:
Now that we have all our components, let’s see how we can use our newly minted AI assistant:
if __name__ == "__main__":
    agent = AgentPro()
    agent.run("""make a detailed deck on the best forms of leadership with at
least 10 slides and save it to a pptx called leadership.pptx""")
With this simple command, we’re asking our agent to create a full presentation on leadership styles, complete with at least 10 slides, and save it as a PowerPoint file.
Our agent will generate the necessary Python code (likely using a library like python-pptx), install any required libraries, execute the code to create the presentation, and then report back with the results or any errors encountered.
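For a sense of what that generated code might look like, here is a rough, hand-written sketch using python-pptx; the actual code the agent produces will vary from run to run:

# pip install python-pptx
from pptx import Presentation

# Illustrative slide titles; the agent would choose its own content.
styles = ["Transformational", "Servant", "Democratic", "Autocratic", "Laissez-Faire",
          "Transactional", "Charismatic", "Situational", "Coaching", "Bureaucratic"]

prs = Presentation()
for style in styles:
    slide = prs.slides.add_slide(prs.slide_layouts[1])  # title + content layout
    slide.shapes.title.text = f"{style} Leadership"
    slide.placeholders[1].text = f"Key traits and examples of {style.lower()} leadership."

prs.save("leadership.pptx")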
We’ve just built the foundation of a powerful AI agent capable of generating and executing Python code on demand. From setting up its “brain” with the OpenAI API, to giving it the power to write and run code, to equipping it with the ability to install the tools it needs, we’ve created a versatile digital assistant.
This is just the beginning of what’s possible with custom AI agents. In future installments, we’ll explore how to enhance AgentPro with web search capabilities, image generation, and even more complex decision-making processes.
Remember, with great power comes great responsibility. Your new AI assistant is a powerful tool, but it’s up to you to guide it wisely. Use it to automate tedious tasks, explore new ideas, and push the boundaries of what’s possible with AI.
Just maybe don’t ask it to write your wedding vows or decide on your next career move; some things are still best left to human intuition!
Stay tuned for Part B, where we’ll teach our agent some new tricks and begin to unlock its true potential. Until then, happy coding, and may your AI adventures be bug-free and endlessly exciting!
Follow for Part B!
If you are interested in learning more about this content, please subscribe. You can also connect with me on LinkedIn.
About me
Hi! I’m Hamza, and I’m thrilled to be your guide on this exciting journey into the world of AI agents. With a background as a Senior Research Scientist at Google and teaching experience at prestigious institutions like Stanford and UCLA, I’ve been at the forefront of AI development and education for years. My passion lies in demystifying complex AI concepts and empowering the next generation of AI practitioners.
Speaking of which, if you’ve enjoyed this deep dive into building AI agents from scratch, you might be interested in taking your LLM knowledge to the next level. I’ve recently developed a comprehensive course titled Enterprise RAG and Multi-Agent Applications on the MAVEN platform. This course is tailored for practitioners who want to push the boundaries of what’s possible with Large Language Models, especially in enterprise settings.
In Enterprise RAG and Multi-Agent Applications, we explore cutting-edge techniques that go beyond the basics. From advanced Retrieval-Augmented Generation (RAG) solutions to the latest methods in model optimization and responsible AI practices, this course is designed to equip you with the skills needed to tackle real-world AI challenges.
Whether you’re looking to implement state-of-the-art LLM applications or dive deep into the intricacies of model fine-tuning and ethical AI deployment, this course has you covered.