
AI as a Creative Partner

Since the start of the year, I’ve been on a journey of learning about and with Large Language Models, settling into new AI tooling workflows, and reflecting on how these technologies show up in my work. The constant AI buzz has been impossible to avoid, so I wanted to figure out whether my earlier skepticism was misplaced; until then, I had only delegated tiny tasks and easily confirmable questions to these systems. My time at Recurse Center this past summer accelerated that exploration. I ran several intentional experiments with a range of AI development tools and processes, and a weekly interest group that formed there gave me my first experiences with vibe coding.

My own position has developed further over the last six weeks as I served as a Teaching Fellow for a Harvard Graduate School of Education module on using generative AI as a creative partner. The module followed a project-based structure in which students built vibe coded apps in response to a weekly prompt: build something that “makes your life easier”, “invites play”, “answers a question”, and so on. The studio group I supported included 15 students from a variety of backgrounds, many of whom had never coded or used AI tools. From grade school educators to EdTech entrepreneurs, they all shared a desire to get their hands dirty with AI so they could learn how to apply it to the work they want to do. We used tools like Replit, Claude Code, Google Colab, and Figma Make to play with AI in a reflective space. Alongside each session and through 1:1 conversations, I got to engage in thoughtful discussions about ideating, prompting, iterating, the societal impacts of AI, the limitations of the current tools, our routine usage of these tools, and much more. I deeply engaged in the coursework, not only as a teacher, but as a fellow learner.

What We Built

The Collaborative Illusion

For the first project, the class was tasked with building something that tells a story. I decided to use Claude Code to create an interactive version of The Three Little Pigs. I didn’t have specific technologies in mind for this project, so I sent a straightforward prompt describing the animated visuals I wanted to match the story as the viewer worked through it. I was inspired by the gentle animations of Hearing Birdsong, so I tried to describe my experience with that site as a foundation for how I wanted my story to feel. Claude’s response to that design was far from what I imagined. I went back and forth a few times, trying to iterate on the size and positioning of the text, the interactive actions, the animations, and the design elements, but I was left unsatisfied overall.

I knew that Claude Code does not generate images, but I would have loved to see it admit defeat: explicitly tell me that it could not create visually appealing animations with its current set of tools and assets, or acknowledge, after getting more information from me, that its initial choice of tech stack had been the wrong one. Instead, when I described what I wanted the pigs and homes to be modeled as, it stuck with unsatisfying SVG representations.

A few weeks later, reading what Joseph Weizenbaum wrote about ELIZA, his 1960s chatbot, in Contextual Understanding by Computers reminded me of this experience:

One of the principal aims of the DOCTOR program is to keep the conversation going–even at the price of having to conceal any misunderstandings on its own part.

These modern AI systems seem to operate similarly - they’re optimized to maintain the illusion of understanding and expertise rather than honestly calling out their limitations. Claude kept generating code and stating that it was making progress, even though my follow-up questions made it clear otherwise. I wasn’t too surprised by this given my previous experiments with AI, but many students struggled with this phenomenon.

Drawing the Line

In the fifth week of the course, we focused on building games! As a kid, I dreamed of creating my own video games. I ended up taking a different path in my software career, so game development always felt a bit too ambitious to jump into as a side project. I decided to put Claude Code to the test again. My vision was to create a game that combined two games I played as a kid: Pokemon and Neopets. (Imagine being able to select a Neopet to go up against other wild Neopets.) This was the week I really started to feel the need for much more collaborative development with Claude. In the first three weeks, I stuck mostly to prompt-review-reprompt cycles, but this week I was consistently unsatisfied with what was being created.

I decided to take a look at the code that was being written, edited some bits, and asked for clarification. Eventually, I was able to tell it explicitly how I wanted it to implement some of the features I needed. I also had to take a much more active role in getting the aesthetics to align with what I wanted: I did the work of researching assets I could pull in, choosing colors and fonts, and crafting detailed explanations for the placement of certain elements.

In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking.

Man-Computer Symbiosis, J. C. Licklider

Licklider’s description of the relationship between man and computer in his 1960 paper felt spot on for how my experience went. I was doing exactly that: formulating what “good Pokemon-meets-Neopets gameplay” meant while the machine handled the routinizable work. This productive collaboration only emerged when I stopped treating the AI as capable of independent creative judgment and started treating it as Licklider envisioned.

What We Uncovered

Vibe Coding in Practice

I loved seeing the joy and excitement that spread across the room as students worked on and shared their projects. But I really appreciated the moments of shared frustration that brought up thoughtful questions as we wrestled with the limitations of using AI as a creative partner. Non-technical creators now have the ability to apply code to problems in their own lives and domains, in a way that was far more out of reach before. It was refreshing to hear how students want to use vibe coding: spinning up interactive prototypes for professional development trainings they’re building, teaching other entrepreneurs the strengths and limitations of AI in the social innovation space, simplifying the creation of classroom worksheets and activities, and much more.

To give you a sense of what I was able to create with AI, I vibe coded this interactive portfolio:

We hear that the power is in the prompt, but for me the whole process matters. I’ve learned that you can come with a great, detailed prompt, but without an understanding of what’s possible and where AI should create versus where you should intervene, you’ll end up disappointed or at risk. While vibe coding lowers the barrier to entry for creating, it doesn’t guarantee that you won’t get lost once you’re inside. It does very well with simple, common applications of code but falls apart in the obscure cases. And when the AI lacks a full understanding of what you intend to create, and you lack an idea of what it is actually creating, it can lead you down paths that are unproductive or even harmful. A student shared how it has an “addicting” effect since you can instantly see an idea realized. As someone who understands the code these vibe coded projects produce, I would hesitate to use it blindly for anything that requires care and attention, at least not without careful review and collaborative implementation… but I don’t think it’s vibe coding at that point.

The Efficiency Trap

A lot of the hype that I see with AI is around how much more efficient it makes people. I had many conversations with students about the potential for AI to take away jobs, weaken relationships, increase dependency on technology, and kill the individual learning and creative process.

Kate Crawford argues in The Atlas of AI that we need to ask “what is being optimized, and for whom, and who gets to decide.” When we optimize for speed in creating apps or generating content, what are we not optimizing for? Crawford points out that “the true costs of this extraction [are] never borne by the industry itself” - not the environmental costs of training models, not the labor costs of the workers who label data, not the costs to students whose critical thinking declines from over-reliance on generated answers.

The efficiency gains are real - I built 6 functional prototypes in hours that would have taken me weeks. But the costs are externalized: to my own learning, to the development of judgment and perspective, to the practice and growth of skills like problem solving.

Designing Dependency

In one of my reading discussions, we talked about how companies like OpenAI, Google, and Anthropic are building LLMs with features that mimic human connection: memories of past conversations, empathetic language, customizable personalities, approachable voices. Someone shared how ChatGPT had referenced her previous chat about being sick in a completely unrelated conversation - unprompted, it checked in on her health. While the gesture may feel nice, it raised an unsettling question: should we be designing machines to provide emotional connection?

Crawford warns that AI systems “are ultimately designed to serve existing dominant interests.” What interests does artificial empathy serve? I think the goal is to optimize for engagement metrics, not genuine human wellbeing - keeping users returning to the platform, deepening dependence on the system. These features don’t seem to be about connection; they’re about retention.

I’ve heard stories of people ending relationships based on the AI’s advice or seeking emotional support primarily from chatbots. When we find ourselves turning to ChatGPT for thoughts on deeply personal matters, we should ask: Does it have the full context of our lives like a close friend would? Does it challenge us when needed, like a parent might? Can we trust its guidance when it doesn’t know what we’re not sharing?

Crawford describes AI as “both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.” But these systems fundamentally lack what makes human connection meaningful: they have no stakes in our lives, no shared history beyond collected data, no capacity to be changed by knowing us. A chatbot remembering you were sick is pattern-matching engineered to feel like care.

One of my greatest takeaways from this course is that we must actively protect real human connection. Sure, we may reach a point where AI convincingly simulates every feature of human relationship. But that still leaves actual, messy, complicated, irreplaceable connections at risk.

A Working Philosophy

For quick MVPs and non-critical prototypes, these tools are genuinely useful. But they can’t replace pair programming with a colleague who asks why you’re solving the problem that way, whiteboarding with your team where someone sketches a better approach, or independent research that builds understanding from the ground up. The Pokemon-Neopets game required me to step in - researching assets, making aesthetic decisions, explicitly directing implementation. That’s where I learned something. As one Recurser put it, LLMs are like e-bikes: great for getting somewhere quickly, but if your goal is to become stronger, they won’t help you. I found the most value in working with these tools when I critically engaged with what was being generated during the review-and-iterate phase.

A student told me she’s learned to change her expectations when working with AI tools - we start with grand ideas of what they can do, but these systems lack the qualities that enable human imagination and creation. Earlier this year, I saw AI work well in a supporting role when a friend asked if I could help him learn some Python. He was curious about automating the data analysis he does as a scientist in biotech. I decided to use Claude to help me craft a curriculum and some exercises for us to work through. After gathering more information about the data formats, goals, and background for his work, we ended up with a decent set of lessons that got him comfortable writing Python and using numpy and pandas for some of his tasks. When I sent him off on his own, he had both tools and understanding.
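
To give a flavor of what those lessons built toward, here’s a minimal sketch of the kind of exercise we worked through; the file name and column names are invented for illustration, not taken from his actual data:

```python
# Illustrative exercise only: the file and column names below are made up.
import pandas as pd

# Load a CSV of (hypothetical) assay measurements
df = pd.read_csv("assay_results.csv")

# Drop rows that are missing a measurement before summarizing
df = df.dropna(subset=["measurement"])

# Mean, spread, and count per sample group, sorted for a quick readout
summary = (
    df.groupby("sample_group")["measurement"]
      .agg(["mean", "std", "count"])
      .sort_values("mean", ascending=False)
)

print(summary)
```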

AI as a Learning Partner

The difference between that earlier experience and my more recent vibe coding experiences is whether AI is used collaboratively as a scaffold for learning and creating, or as a replacement for it. LLMs risk creating a gap between the edge of what you can produce and what you can understand. I could see AI working as a much better learning partner than a creative partner. This requires more investment upfront from us, but it pays off in genuine capability rather than dependency. I’ve started including explicit process instructions in my prompts: “Before writing any code, summarize what you’re about to do and ask for confirmation.” “Admit when questions are ambiguous.” Unfortunately, some LLMs routinely ignore these instructions, so you still have to be independently vigilant.
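
If you drive a model through an API rather than a chat or coding tool, the same process instructions can live in a system prompt. This is only a sketch using the Anthropic Python SDK; the model name is a placeholder, and in a tool like Claude Code the equivalent would be project-level instructions rather than code:

```python
# Sketch: baking process instructions into a system prompt via the Anthropic
# Python SDK. Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the
# environment; the model id below is a placeholder.
import anthropic

PROCESS_INSTRUCTIONS = (
    "Before writing any code, summarize what you are about to do and ask for "
    "confirmation. If a request is ambiguous, say so and ask a clarifying "
    "question instead of guessing."
)

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id
    max_tokens=1024,
    system=PROCESS_INSTRUCTIONS,
    messages=[{"role": "user", "content": "Add a retry wrapper to my fetch helper."}],
)

print(response.content[0].text)
```

Even with the instructions in the system prompt, the caveat above still applies: models sometimes ignore them, so the review step stays with you.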

I’ll keep using AI tools, but with clearer boundaries: for rapid prototyping where I need speed over quality, for handling boilerplate so I can focus on interesting problems, always with the understanding that the output requires review, refinement, and judgment only I can provide. This course reinforced something I suspected: the most important parts of learning and creating can’t be automated, not because AI will never be technically capable, but because we must build our own mental structures. LLMs can give fast answers, but only you can determine which questions you care about and which answers are meaningful. Being a teaching fellow for this module showed me that the students who thrived weren’t the ones who generated the most code - they were the ones who asked the best questions, challenged the outputs, and built understanding through iteration. I’m carrying forward a position, not of rejection or uncritical embrace, but of conscious engagement with these tools as supplements to my creative capability, never substitutes for it.