Agent skills have been described as a game-changer. I do not dispute the characterization, but they are also spoken about in rather mystical language, and that does bother me a bit.
Rather than a complex technical breakthrough, they are another example of what Claude Code has been getting right since the beginning: simple but powerful tools (like “Read file”, “Grep”, “Bash”) combined with the model’s natural helpfulness take you a very long way.
In this post we will implement a skill system compatible with Anthropic’s public skills in just over 100 lines of code.
I wrote the original implementation in Solve It, a tool and approach to learning by doing, while taking advantage of AI in a way that accelerates your learning instead of replacing your understanding. Check out the dialog walking through my original implementation process.
Agent skills are a shareable way of giving an agent a new capability. They
consist of a folder with a SKILL.md markdown file in it. The file contains a
name and a description that explains when it should be used. The rest of the
file is the actual instructions.
The key feature of skills is progressive disclosure. On load, agents are only
told about the names and descriptions of the skills. Only when they decide to
“activate” a particular skill do the instructions get loaded into the context.
The SKILL.md file can in turn reference other files with more instructions for
specialized use cases. And the skill can also bundle scripts to be executed
without having to be written by the LLM or read into context. Everything is
loaded just-in-time, conserving context as much as possible.
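To make the layering concrete, a typical skill folder might look like this (a made-up example; only SKILL.md is required):

```
pdf-processing/
├── SKILL.md          # name, description, and core instructions
├── forms.md          # extra instructions, read only when needed
└── scripts/
    └── fill_form.py  # runnable helper, never loaded into context
```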
Skills were first introduced as a Claude Code feature, but they were quickly made into an open standard by Anthropic. Agents like Copilot and ChatGPT followed with support shortly after.
The Agent Skill spec has this to say about skill structure:
Skills are folders containing a SKILL.md file. Your agent should scan configured directories for valid skills.
The SKILL.md file must contain YAML frontmatter followed by Markdown content:
```
---
name: skill-name
description: A description of what this skill does and when to use it.
---
Some skill instructions.
```
Let’s start by writing a Skill dataclass to load skills from a folder and
hold their data:
```python
from dataclasses import dataclass
from pathlib import Path

import frontmatter

@dataclass
class Skill:
    name: str
    description: str
    location: str = None

    @classmethod
    def from_directory(cls, directory):
        location = str((Path(directory) / 'SKILL.md').resolve())
        post = frontmatter.load(location)
        return cls(
            name=post['name'],
            description=post['description'],
            location=location
        )

    def instructions(self):
        post = frontmatter.load(self.location)
        return post.content
```
We used frontmatter (the python-frontmatter package) for parsing the YAML section. We are only reading the name and the description from there, but the spec also describes other optional keys that you might want to check out. We stored the absolute path to the SKILL.md file so the agent knows where to look for more files. And we also added an instructions method to fetch the entire skill file on demand.
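If you are curious what the library is doing for us, the frontmatter format is simple enough to hand-parse. Here is a rough stdlib-only sketch (the parse_skill_md helper is mine, and it ignores YAML edge cases like quoting and multi-line values):

```python
def parse_skill_md(text: str) -> tuple[dict, str]:
    # Split "---\n<yaml>\n---\n<body>" into metadata and instructions
    _, meta_block, body = text.split('---\n', 2)
    meta = {}
    for line in meta_block.strip().splitlines():
        key, _, value = line.partition(':')
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_skill_md(
    "---\nname: hello\ndescription: Greets people.\n---\nGreet warmly.\n"
)
print(meta['name'], '|', body)  # hello | Greet warmly.
```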
Next let’s write a helper to find the skills in the “configured directories”, so that later we can tell the agent about them.
```python
import os
from typing import List

def load_skills() -> List[Skill]:
    # iterdir() already yields full paths, so we can filter and load directly
    return [
        Skill.from_directory(directory)
        for skills_root in os.environ.get('SKILLS_PATH', '').split(':')
        for directory in Path(skills_root).iterdir()
        if directory.is_dir()
    ]
```
We expect a PATH-style environment variable called SKILLS_PATH containing a :-separated list of folders with skills in them, and we use Skill.from_directory to load each one as a Skill object.
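To see the discovery step on its own, here is a self-contained sketch with made-up folder names:

```python
import os
import tempfile
from pathlib import Path

# Build a throwaway skills root with two skill folders and one stray file
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ('pdf', 'internal-comms'):
        (root / name).mkdir()
        (root / name / 'SKILL.md').write_text('---\nname: x\ndescription: y\n---\n')
    (root / 'notes.txt').write_text('not a skill')  # skipped: not a directory

    os.environ['SKILLS_PATH'] = str(root)
    # The same traversal load_skills performs, minus the frontmatter parsing
    found = sorted(
        d.name
        for skills_root in os.environ['SKILLS_PATH'].split(':')
        for d in Path(skills_root).iterdir()
        if d.is_dir()
    )
    print(found)  # ['internal-comms', 'pdf']
```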
The spec also has some information about how to describe the skills to agents:
For filesystem-based agents, include the location field with the absolute path to the SKILL.md file. For tool-based agents, the location can be omitted. Keep metadata concise. Each skill should add roughly 50-100 tokens to the context.
```xml
<available_skills>
  <skill>
    <name>pdf-processing</name>
    <description>Extracts text and tables from PDF files, fills forms, merges documents.</description>
    <location>/path/to/skills/pdf-processing/SKILL.md</location>
  </skill>
</available_skills>
```
Let’s write a function to load all of the skills and render them in the format the spec recommends:
```python
import xml.etree.ElementTree as ET

def available_skills() -> str:
    """Returns a skill description block to be added to the system prompt"""
    skills = load_skills()
    root = ET.Element('available_skills')
    for s in skills:
        skill_el = ET.SubElement(root, 'skill')
        ET.SubElement(skill_el, 'name').text = s.name
        ET.SubElement(skill_el, 'description').text = s.description
        ET.SubElement(skill_el, 'location').text = s.location
    ET.indent(root)
    return ET.tostring(root, encoding='unicode')
```
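One benefit of building the block with ElementTree rather than string formatting is that XML escaping comes for free. A quick sketch with made-up skill values:

```python
import xml.etree.ElementTree as ET

root = ET.Element('available_skills')
skill_el = ET.SubElement(root, 'skill')
ET.SubElement(skill_el, 'name').text = 'csv-tools'
ET.SubElement(skill_el, 'description').text = 'Splits & merges CSV files'
xml_str = ET.tostring(root, encoding='unicode')
print(xml_str)  # the & in the description is escaped as &amp;
```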
## skill tool

Now we can tie all the discovery helpers together into an agent tool.
```python
def skill(
    name: str,  # The name of a skill to activate
) -> str:  # The contents of the skill's SKILL.md
    for skills_root in os.environ.get('SKILLS_PATH', '').split(':'):
        skill_path = Path(skills_root) / name
        if skill_path.exists():
            return Skill.from_directory(skill_path).instructions()
    raise ValueError(f"No skill named {name!r} was found")
```
```python
skill.__doc__ = f"""
Execute a skill within the main conversation

When users ask you to perform tasks, check if any of the available skills below
can help complete the task more effectively. Skills provide specialized
capabilities and domain knowledge.

How to use skills:
- Invoke skill using this tool with the skill name only (no arguments).
- When you invoke a skill, the skill's prompt will expand and provide
  detailed instructions on how to complete the task.

Important:
- Only use skills listed in <available_skills> below.
- Do not invoke a skill that is already running.

{available_skills()}
"""
```
We will be using claudette to test the implementation, so we need to adhere to its tool format. A Python docstring cannot be an f-string, so we set __doc__ separately; we needed an f-string to be able to interpolate available_skills().
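The pattern in isolation: an f-string literal at the top of a function body is not stored as the docstring, so the formatted text has to be attached to __doc__ afterwards.

```python
def f():
    f"not a docstring {1 + 1}"  # an expression statement, not a docstring

print(f.__doc__)  # None

def g(): pass
g.__doc__ = f"computed: {1 + 1}"
print(g.__doc__)  # computed: 2
```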
Interestingly, Claude Code doesn’t say anything about skills in its system prompt, leaving it all up to the Skill tool description. The description above is adapted from Claude Code’s, with some parts removed to make it more agent-agnostic. We are not doing any fancy context manipulation, because I found that simply returning the skill’s instructions as the tool’s string result works well enough.
## read tool

So far we could use single .md file skills, but to fully leverage progressive disclosure the agent needs a way to read other files referenced in SKILL.md. Let’s add a read tool to help with that.
```python
def is_valid_absolute_path(path: str) -> bool:
    path = Path(path)
    if not path.is_absolute():
        return False
    valid_roots = [Path(root) for root in os.environ.get('SKILLS_PATH', '').split(':')]
    return any(path.is_relative_to(root) for root in valid_roots)

def read(
    absolute_path: str,  # Absolute path of the file to read
) -> str:  # Text contents of the read file
    """Returns the text contents of the file at the given absolute path"""
    if not is_valid_absolute_path(absolute_path):
        raise ValueError("The path is not absolute, or the agent is not authorized to read it")
    with open(absolute_path, 'r') as f:
        return f.read()
```
We started with a helper that makes sure the agent knows which file it is requesting: the path must be absolute, and it must lie within a skill folder. Remember that the agent learns each skill’s absolute path from the location field. Next comes the actual read tool. If the helper validates the path, we simply read the file at that location and return its contents. In the future we could extend this tool with features like automatically encoding images, but it works for now.
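Here is the validation logic on its own, with a hypothetical skills root standing in for the SKILLS_PATH entries:

```python
from pathlib import Path

valid_roots = [Path('/app/data/skills')]  # hypothetical skills root

def allowed(p: str) -> bool:
    path = Path(p)
    return path.is_absolute() and any(path.is_relative_to(r) for r in valid_roots)

print(allowed('/app/data/skills/pdf/forms.md'))  # True
print(allowed('pdf/forms.md'))                   # False: not absolute
print(allowed('/etc/passwd'))                    # False: outside the roots
```

One caveat: is_relative_to is a purely textual check, so a path containing .. segments can pass it while pointing outside the root; a hardened version would resolve() the path first.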
## run tool

Packaging executable code into the skill is very handy for token conservation. You could put enough information in the instructions for the model to come up with a one-off script by itself, but that is going to take a lot of tokens. You could provide the pre-written script, saving half of those tokens by making the information directly accessible. But if you make it runnable, the agent doesn’t need to spend any tokens on the implementation details at all.

The flip side is that this is an awkward way to distribute software. It does put the scripts in the context of a particular skill, but it is missing key features, like a way to express dependencies. I think it might be better to install the scripts into your agent’s runtime environment through more conventional means, and simply refer to them in the skills as something available on the system.

Time will tell, but in the meantime, let’s give it a try:
```python
import subprocess
import sys

@dataclass
class RunResult:
    stdout: str
    stderr: str
    return_code: int

def run(
    absolute_path: str,  # The path to a python script
    arguments: List[str] = [],  # A list of arguments for the script
) -> RunResult:
    """Runs the python script at the given absolute path"""
    if not is_valid_absolute_path(absolute_path):
        raise ValueError("The path is not absolute, or the agent is not authorized to read it")
    path = Path(absolute_path)
    if path.is_dir():
        raise ValueError("The path does not point to a script")
    result = subprocess.run(
        [sys.executable, str(path), *arguments],
        capture_output=True,
        text=True,
        env={**os.environ, 'PYTHONPATH': str(path.parent)}
    )
    return RunResult(stdout=result.stdout, stderr=result.stderr, return_code=result.returncode)
```
We created a simple dataclass to hold the result of the subprocess call; claudette will use it to explain to the agent what to expect. The tool itself reuses the earlier helper to ensure the file is OK to run, and then executes it as a Python script.
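The core move is just sys.executable plus subprocess.run. Here it is in isolation with a throwaway script:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / 'hello.py'
    script.write_text('import sys\nprint("args:", sys.argv[1:])\n')
    # Run the script with the same interpreter that is running this code
    result = subprocess.run(
        [sys.executable, str(script), 'a', 'b'],
        capture_output=True, text=True,
    )
    print(result.stdout)      # args: ['a', 'b']
    print(result.returncode)  # 0
```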
In a production context this should probably look totally different, support more than python scripts, and take isolation more seriously, but for demonstration purposes:

- We use sys.executable to find the python executable that is running the current module.
- We set PYTHONPATH so that scripts can import their siblings. This is probably overkill for a normal Python installation, but is there such a thing as a normal Python installation?

## claudette

Let’s put all of this together and build a tiny skill-capable agent with claudette. We will need some actual skills to test, so check out Anthropic’s public skill repository and set your SKILLS_PATH accordingly.
The agent is very simple to build:
```python
from IPython.display import display
from claudette import *

chat = Chat('claude-haiku-4-5', tools=[skill, read, run])
prompt = "How should I approach announcing in the company newsletter that we are launching a product?"

for o in chat.toolloop(prompt):
    if o.role != 'user':
        display(o)
    if hasattr(o, 'stop_reason') and o.stop_reason == 'tool_use':
        for b in o.content:
            if b.type == 'tool_use':
                display(f"Used {b.name} with args {b.input}")
```
If you are not familiar with claudette: toolloop takes care of the necessary back and forth to have the model request a tool, execute it, and pass the result back to the model. We skip user-role messages because those carry the verbose markdown contents as tool results, and we highlight tool uses so we can see the tools in action.
The output looks like:
```
I can help you craft an effective product launch announcement for your company
newsletter! Let me pull up the internal communications resources to guide you
through this.

"Used skill with args {'name': 'internal-comms'}"

Great! Let me pull up the company newsletter guidelines to show you the best
approach:

"Used read with args {'absolute_path': '/app/data/skills/skills/internal-comms/examples/company-newsletter.md'}"

Perfect! Here's how you should approach announcing a product launch in your
company newsletter:

# Key Principles for a Product Launch Announcement

1. Make it a headline item
   - Place it in a prominent section like :megaphone: Company Announcements at
     the top

... More stuff ...
```
It works! The agent is identifying which skill to use based on the description, activating it, and reading extra files based on the instructions. And then it uses all of that context to solve the user’s problem.
Let’s try one with run:
```python
from claudette import *

chat = Chat('claude-sonnet-4-5', tools=[skill, read, run])
prompt = "Can you check if the PDF form at /app/data/example.pdf has fillable fields?"

for o in chat.toolloop(prompt):
    if o.role != 'user':
        display(o)
    if hasattr(o, 'stop_reason') and o.stop_reason == 'tool_use':
        for b in o.content:
            if b.type == 'tool_use':
                display(f"Used {b.name} with args {b.input}")
```
Which outputs something like:
```
I'll help you check if the PDF form has fillable fields. Let me use the PDF
skill to examine the document.

"Used skill with args {'name': 'pdf'}"

Now let me check the forms.md file to see how to detect fillable fields in
a PDF:

"Used read with args {'absolute_path': '/app/data/skills/skills/pdf/forms.md'}"

Perfect! Now I can check if the PDF has fillable fields using the
provided script:

"Used run with args {'absolute_path': '/app/data/skills/skills/pdf/scripts/check_fillable_fields.py', 'arguments': ['/app/data/example.pdf']}"

Great news! Yes, the PDF form at /app/data/example.pdf has fillable fields.
```
Notice that the agent was able to run the script without having to read it first.
That’s it — around 100 lines of Python and three straightforward tools. The skill system’s elegance is in what it doesn’t need: no special context window manipulation, no complex orchestration. The skill tool’s docstring carries the skill index, and everything else is just reading files and running scripts on demand.
If you’re building agents, consider adding skill support. It’s a lightweight way to organize prompts, and you might find it replaces some of the subagent orchestration you thought you needed.