The Journey:
Building the Busylight

One Man, Three AIs, and a Pile of LEDs

The Spark

It started, as many projects do, with a simple annoyance. I work in an office where people are constantly in and out of meetings, on calls, or trying to concentrate. There is no easy way for a coworker walking past to know whether it is a good time to stop by or if they should come back later. I had seen commercial busy light products online, the kind that glow red or green outside your cubicle, but they were expensive, limited in features, and typically tied to a single software ecosystem. I figured I could build something better. I had no idea what I was signing up for.

In late August of 2025, I sat down with a Raspberry Pi Zero that I had been tinkering with and opened a conversation with Google Gemini. I had a paid Gemini subscription at the time, and it seemed like the natural place to start. My first question was simple: how do I wire up some LEDs and buttons to my Pi? Gemini walked me through it patiently, explaining GPIO pins, resistors, breadboard layouts, and basic Python scripting. Within a day, I had three LEDs lighting up when I pressed three buttons. It was a small victory, but it felt enormous. I was not a programmer. I had never written a line of Python in my life. But the light came on, literally, and I was hooked.

The next day, I connected a tiny OLED display I had lying around, a 0.91-inch screen barely bigger than my thumbnail. Gemini helped me get text rendering on it. Suddenly, the project had a face. I could press a button and the display would announce: The light is set to Available. After five seconds of inactivity, it would dissolve into a Matrix-style digital rain animation, green characters cascading down the tiny screen. It was completely unnecessary and absolutely delightful.

I named it the BusyLight, and the project grew from there far beyond anything I had originally imagined.

From Pi to ESP32: The Hardware Pivot

The Raspberry Pi was a great learning platform, but I quickly ran into its limitations for what I wanted to build. The LEDs were dim because the Pi's GPIO pins can only supply a small amount of current. I asked Gemini about using transistors to drive brighter LEDs, and it patiently walked me through NPN transistor circuits, base resistors, and Ohm's Law calculations. But even with transistors, the Pi felt like overkill for what was essentially a smart light, a full Linux computer running an operating system just to flip some pins on and off.

Around mid-September, I received a set of ESP32 microcontroller boards in the mail. The ESP32 is a tiny, inexpensive chip with built-in WiFi and Bluetooth, designed specifically for Internet of Things projects. Gemini suggested the switch, and it was the right call. The ESP32 could do everything I needed at a fraction of the power and cost of the Pi. But it meant starting over. The programming language changed from Python to C++, the development environment shifted from writing code on the Pi itself to compiling sketches in the Arduino IDE on my laptop and uploading them over USB. Everything I had learned about GPIO pins and button logic still applied conceptually, but the syntax and tooling were completely new.

This is where the reality of working with AI became clear for the first time. Gemini had been my sole advisor, and it was good at explaining concepts and writing starter code. But as the project grew more complex, I started noticing cracks. The ESP32 Arduino framework had recently undergone a major version change from 2.x to 3.x, and the two versions handled LED control differently. The old ledcSetup() and ledcAttachPin() functions that Gemini kept suggesting simply did not exist in the newer version. I spent hours battling compilation errors that the AI confidently assured me it had fixed, only to hit the same wall again. Gemini would apologize, offer a revised solution, and sometimes that solution was the exact same code it had given me three messages earlier.

This was my first serious encounter with AI hallucination, the phenomenon where an AI generates plausible-sounding but factually incorrect information with complete confidence. It would not be my last.

Enter ChatGPT: The Second Opinion

Frustrated with the PWM compilation errors, I did something that would become a recurring pattern throughout the project: I took the problem to a different AI. I had activated a free trial of ChatGPT Plus and decided to see if a fresh perspective might help. I copied my error messages and code directly from the Gemini conversation and pasted them into ChatGPT.

ChatGPT was immediately helpful in a different way than Gemini. Where Gemini tended to give me expansive, tutorial-style responses with a lot of context and emoji, ChatGPT was more surgical. It quickly identified the version incompatibility and suggested pinning my Arduino ESP32 core to version 2.0.17 to avoid the breaking API changes. This was a pragmatic solution that Gemini had never offered, instead spending message after message trying to make the new API work.

From that point forward, I began working with both AIs simultaneously. I would develop with Gemini, hit a wall, and then bring the problem to ChatGPT. Sometimes ChatGPT would solve it outright. Sometimes it would give me a clue that I could bring back to Gemini. And sometimes, both of them would be wrong in complementary ways, and I would have to synthesize my own solution from the wreckage of their suggestions. It was like having two tutors who each knew different things and were both occasionally making things up.

The interplay between the AIs became a project within the project. I learned to recognize each one's strengths. Gemini was better at big-picture architecture discussions and would often suggest features I had not considered. ChatGPT was better at debugging specific code problems and tended to give more concise, actionable answers. But both had blind spots. Both would sometimes insist that a function existed when it did not, or that a library supported a feature when it had been deprecated years ago. The human in the loop, me, became the quality control system. I could not write the code myself, but I could tell when something was not working, and I could decide which AI's suggestion to trust in any given moment.

The Calendar Problem

With the basic hardware working, buttons toggling LEDs and special effects like disco mode responding to long presses, it was time for the feature that would make BusyLight actually useful: automatic calendar integration. The idea was simple in concept. The light should turn red when I am in a meeting and green when I am free, all by reading my Microsoft 365 work calendar. The execution was anything but simple.

The ESP32 cannot directly query Microsoft's servers. It is a microcontroller with limited memory and processing power, not equipped to handle OAuth authentication flows or parse complex XML. The solution, which emerged through conversations with both Gemini and ChatGPT, was to create a middleman: a Google Apps Script that would fetch my calendar data from an ICS feed, process it into a simple JSON response, and serve it to the ESP32 through a lightweight API endpoint.
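The heart of that middleman is a reduction step: many calendar events in, one tiny status payload out, small enough for a microcontroller to parse. A minimal sketch of that logic follows; the event shape and field names here are illustrative guesses, not the project's actual schema.

```javascript
// Sketch of the middleman's core decision: given already-parsed calendar
// events, reduce them to the small JSON payload an ESP32 can handle.
// Times are plain epoch numbers; "status" and "nextStart" are made-up names.
function busylightStatus(events, now) {
  var current = null;
  var next = null;
  for (var i = 0; i < events.length; i++) {
    var e = events[i];
    if (e.start <= now && now < e.end) {
      current = e; // a meeting is happening right now
    } else if (e.start > now && (next === null || e.start < next.start)) {
      next = e; // earliest upcoming meeting
    }
  }
  return {
    status: current ? "BUSY" : "AVAILABLE",
    nextStart: next ? next.start : null
  };
}
```

In Apps Script, a wrapper around logic like this would fetch the ICS feed, parse it into events, and serve the resulting object as JSON from a web-app endpoint.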

This introduced an entirely new layer of complexity. Now I was not just writing Arduino C++ for the ESP32; I was also writing JavaScript for Google Apps Script, managing Google Cloud deployments, and dealing with ICS calendar format parsing. Each of these brought its own challenges, and each became a new arena for AI collaboration and frustration.

The ICS parsing was particularly painful. Calendar data in ICS format is deceptively complex. Times can be in UTC or local time. All-day events have a different format than timed events. Recurring meetings use RRULE definitions that look like cryptic incantations: RRULE:FREQ=MONTHLY;BYDAY=4TH means the fourth Thursday of every month, but the raw ICS file only contains a single event entry with the original start date. Your calendar application expands that rule into visible occurrences, but the raw data just has the formula. Our parser needed to do that expansion, which is essentially date math with edge cases piled on edge cases.
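That expansion really is just date math. The helper below resolves a monthly BYDAY value like 4TH (fourth Thursday) or -1FR (last Friday) to a concrete date in a given month. It is a simplified illustration of the one step described above, ignoring COUNT, UNTIL, EXDATE, and the rest of the RRULE machinery a real parser has to handle.

```javascript
// Resolve a monthly BYDAY term to a date. Note JavaScript months are 0-based,
// so September is month 8. Positive n counts forward from the 1st; negative n
// counts backward from the last day of the month.
function resolveByDay(year, month, byday) {
  var days = { SU: 0, MO: 1, TU: 2, WE: 3, TH: 4, FR: 5, SA: 6 };
  var m = byday.match(/^(-?\d+)([A-Z]{2})$/);
  if (!m) throw new Error("unsupported BYDAY: " + byday);
  var n = parseInt(m[1], 10);
  var weekday = days[m[2]];
  if (n > 0) {
    // walk forward from the 1st to the first matching weekday, then jump weeks
    var first = new Date(Date.UTC(year, month, 1));
    var offset = (weekday - first.getUTCDay() + 7) % 7;
    return new Date(Date.UTC(year, month, 1 + offset + (n - 1) * 7));
  } else {
    // walk backward from the last day of the month (day 0 of the next month)
    var last = new Date(Date.UTC(year, month + 1, 0));
    var back = (last.getUTCDay() - weekday + 7) % 7;
    return new Date(Date.UTC(year, month, last.getUTCDate() - back + (n + 1) * 7));
  }
}
```

Every edge case the narrative mentions lives in the layers this sketch leaves out, which is why the parser took multiple iterations.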

I went through multiple iterations of the calendar parser, often bouncing between Gemini and ChatGPT when one would produce code that broke in unexpected ways. ChatGPT helped me fix UTC parsing issues where meeting times were showing up hours off. Gemini helped redesign the Google Script to query the original calendar directly instead of relying on a subscribed copy that had a frustrating sync delay. At one point, I was carrying code snippets between three browser tabs like a diplomat shuttling between warring nations, each AI building on work the other had started, occasionally undoing work the other had done.

The timezone handling alone nearly broke me. The ESP32 needed to know local time to determine business hours. The Google Script was running in Google's cloud, which uses UTC. The ICS feed could contain times in either format. An AI would suggest a fix that worked for one scenario and broke another. We went through a period where meetings were detected an hour early, then five hours late, then not at all. Each fix introduced new bugs in a whack-a-mole pattern that tested my patience and my trust in AI-generated code.
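The root of the ambiguity is visible in the ICS timestamps themselves: a trailing Z marks UTC, and its absence means a local time whose zone has to come from somewhere else. A simplified parser for illustration; real feeds also carry TZID parameters and VTIMEZONE blocks that this sketch deliberately ignores.

```javascript
// Parse an ICS basic-format timestamp such as "20250915T140000Z".
// With the trailing "Z" the instant is unambiguous UTC; without it,
// the caller must supply a timezone -- exactly the gap that produced
// the hours-off bugs described above.
function parseIcsTime(stamp) {
  var m = stamp.match(/^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})(Z?)$/);
  if (!m) throw new Error("unrecognized ICS timestamp: " + stamp);
  var parts = [+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]]; // JS months are 0-based
  if (m[7] === "Z") {
    return new Date(Date.UTC.apply(null, parts)); // unambiguous UTC instant
  }
  return { local: true, parts: parts }; // floating time: zone must come from context
}
```

Three codebases (firmware, cloud script, feed) each making a different default assumption about that missing zone is all it takes to shift every meeting by hours.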

The Version March

By late September, I was iterating on the firmware at a furious pace. The version numbers tell the story: 3.1, 3.2, 3.4, 3.6, 3.8, then jumping to 5.0, 5.1, 5.3, 5.5. Each version represented a milestone, and each milestone was hard-won. Version 3.1 added business hours enforcement. Version 3.4 introduced multi-network WiFi support so the device could connect at both home and work. Version 3.6 brought WiFiManager, a library that lets the ESP32 create its own WiFi portal for configuration, eliminating hardcoded network credentials. Version 5.0 was the first stable calendar integration.

The jump from 3.x to 5.0 was not a smooth evolution. It was a near-complete rewrite. The calendar integration required ArduinoJson for parsing, FreeRTOS tasks for non-blocking operation, and a fundamental rethinking of the main loop. I was no longer just reading buttons and setting LEDs; the device had to maintain a WiFi connection, periodically poll an API, parse the response, calculate time differences, manage business hours logic, handle manual overrides, and run visual effects simultaneously without any of those tasks blocking the others. For someone who had never written a line of code before August, it was like learning to juggle while riding a unicycle.

The code was credited to Mr. G. Pete Tea & Gemini/Claude in the file headers, an acknowledgment that it was a collaborative effort with multiple AI partners. ChatGPT's name was missing from those headers only because I started using it slightly later, but its contributions were woven throughout.

Version 5.3 brought a particularly memorable crisis. I was battling memory corruption issues on the ESP32. The device would run fine for hours and then crash unpredictably. ChatGPT diagnosed it as a stack overflow caused by the logging system, a FreeRTOS task that was trying to send log messages over HTTP on a secondary core with too small a stack allocation. These are the kinds of problems that are brutally difficult for a non-programmer to debug. I could describe the symptoms, the AI could analyze the code, but the back-and-forth took dozens of messages as we tested hypotheses. The eventual fix, switching to local-only serial logging and removing the network-based logging task, was simple in retrospect but took an entire exhausting evening to reach.

Scaling Up: Five Devices and a Hub

One BusyLight worked. Now I wanted five. The vision was to deploy them to coworkers in my office, each monitoring its owner's calendar and displaying their availability. This meant solving an entirely new set of problems.

The most significant challenge was networking. At work, I have no admin rights on the enterprise WiFi. I cannot see what devices are connected, I cannot configure firewall rules, and I cannot guarantee that a random ESP32 microcontroller will be allowed to stay on the network. The original plan was to use my Raspberry Pi 3B+ as a hub: it would create its own WiFi access point that the ESP32s would connect to, while simultaneously connecting to the work network through a USB WiFi adapter to reach the internet.

This plan sounded elegant in theory. In practice, it was a nightmare. The USB WiFi adapter I purchased on Claude's recommendation, a TP-Link TL-WN725N, turned out to use a Realtek chipset that is notoriously difficult to run in access point mode on Linux. I ended up in a long troubleshooting session with ChatGPT about driver hacks, only to hear ChatGPT sympathetically inform me that Claude had, in its words, led me astray. We eventually got a working configuration by reversing the plan: using the Pi's built-in WiFi as the access point and the USB adapter for the uplink. But the system was fragile. It worked with one or two ESP32 devices connected but would become unstable with three or more.

The instability of the Pi hub became a recurring theme. Startup routines would not execute reliably. MQTT connections, the messaging protocol I was using for device communication, would drop unpredictably. The touchscreen display we had added would render but lose its touch functionality. I spent weeks troubleshooting, rebuilding the Pi from scratch multiple times, adjusting network configurations, and growing increasingly frustrated.

Enter Claude: The Cloud Pivot

By November, my ChatGPT trial had expired and I had begun working primarily with Claude, Anthropic's AI assistant. I brought the entire project to Claude, all the accumulated code, the Node-RED dashboard flows, the Google Script, and the growing list of problems with the Pi hub. Claude's first major contribution was a question that cut through months of accumulated complexity: why was I running infrastructure on a Pi at all?

The suggestion was to move everything to the cloud. Instead of running a local MQTT broker on the Pi, use HiveMQ Cloud, a free-tier managed MQTT service. Instead of hosting Node-RED on the Pi, run it on an Oracle Cloud virtual machine, also free-tier. The ESP32 devices would connect directly to my work WiFi and communicate with cloud services over the internet. The Pi, which had been the source of so much instability, could be retired from its central role entirely.

This architectural shift was transformative. The cloud services were rock-solid in a way the Pi had never been. HiveMQ handled MQTT messaging without dropping connections. Node-RED on Oracle Cloud ran continuously without the mysterious crashes and startup failures that plagued the Pi. The system went from requiring constant babysitting to something I could set up and walk away from.

Claude also led a systematic code review that resulted in version 8.0, which cut the ESP32 firmware from roughly 1,480 lines to 870 lines, a 41 percent reduction, while maintaining all functionality. The Google Script saw a similar cleanup, going from 320 lines to 230. All the cruft accumulated over months of iterative development was stripped away: unused variables, dead code paths, stale comments from abandoned approaches. It was like clearing out a garage after a year of renovations.

The Pi did keep one role: a Raspberry Pi Zero with a 2.13-inch e-ink display became a dedicated status monitor, subscribing to the cloud MQTT broker and displaying an at-a-glance overview of all five devices. It was a fitting evolution. The Pi went from trying to be the entire infrastructure to doing one simple thing well.

Building the Dashboard and Finding Free Cloud Services

Running five IoT devices without admin access to the work network meant I needed a way to monitor and control them remotely. The answer was Node-RED, a visual programming tool designed for wiring together IoT devices, APIs, and services through a drag-and-drop browser interface. Node-RED became the nervous system of the BusyLight project, and building its dashboard was its own long adventure.

The dashboard started simple: a single page with panels for each of the five devices showing their online status, current mode, and recent activity logs. A Master Control section at the top provided colored circles representing each device's state and buttons to send commands to all devices at once, things like switching everyone to disco mode or forcing an auto-refresh. Over time, I added clickable LED indicators, mode controls for each individual device, meeting information displays showing the next upcoming meeting, and system health statistics like MQTT connection status and uptime.

The dashboard went through several redesigns. Early on, everything was crammed onto a single page, which became unwieldy as features accumulated. I reorganized it into separate tabbed pages, one for Master Control and one for each device, with the device names on the Master Control page acting as clickable links to navigate to each device's detail view. At one point, I explored using Node-RED subflows to create a single device template that could be reused five times, eliminating the tedious process of copying changes across five identical panels. Claude and I spent time on this approach before discovering that subflows in Node-RED had practical limitations that made them more trouble than they were worth. We abandoned the idea and went back to individual panels, a pragmatic decision over a theoretically elegant one.

When the Raspberry Pi was still the hub, I also tried to add a 3.5-inch touchscreen LCD to it, displaying a compact version of the Master Control panel. This involved a whole detour into finding compatible LCD drivers, dealing with resistive touchscreen calibration, and creating a size-optimized layout for 320 by 480 pixels. The touchscreen would intermittently lose its touch functionality while still displaying correctly, one of many symptoms of the Pi being asked to do too much simultaneously.

The journey to cloud hosting was not a straight line. When Claude first suggested moving off the Pi, we explored several options. FlowFuse Cloud, a hosting service built specifically for Node-RED by its own development team, seemed like the perfect fit until I discovered that their free tier was actually just a fourteen-day trial, after which it cost fifteen dollars a month. This was a classic case of the AI suggesting a product based on what it knew from training data without having current pricing information. Another option explored was Render, a cloud platform with a free tier, but it had a critical limitation: free instances would spin down after inactivity and lose their data. Node-RED flows would need to be re-imported every time the service restarted, which defeated the purpose.

Oracle Cloud's Always Free tier turned out to be the winner, but setting it up was not trivial. It involved creating a virtual machine, configuring SSH keys, installing Node-RED manually, setting up PM2 as a process manager to keep it running, configuring Nginx as a reverse proxy, and opening the right ports in both the Linux firewall and Oracle's cloud security lists. Claude walked me through each step, and to its credit, the resulting setup has been remarkably stable. When Oracle's trial period ended and I received a somewhat alarming email about it, I had a brief moment of panic before confirming that the compute, storage, and networking resources I was using all fell under the perpetually free tier.

HiveMQ Cloud for MQTT was more straightforward. Their free tier genuinely supports up to a hundred connections with ten gigabytes of monthly data transfer, far more than my five devices would ever need. The setup was simple: create a cluster, generate credentials, update the ESP32 firmware to point to the cloud broker instead of the local Pi, and everything just worked. The relief of having MQTT connections that did not randomly drop after three devices was palpable.

The MQTT topic structure itself went through an evolution. Early in the project, topics were inconsistent. Some used a flat format like busylight1/status while others used a hierarchical format like busylight/1/status. Device-specific topics and broadcast topics overlapped in confusing ways, leading to message spam where the dashboard's activity feed would fill with raw JSON dumps every few seconds. Claude helped standardize everything to a clean hierarchical structure that enabled proper MQTT wildcard subscriptions and made Node-RED filtering dramatically simpler. This was the kind of architectural cleanup that would have been obvious to an experienced developer from the start but that only became apparent to me after living with the consequences of organic, unplanned growth.
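The reason the hierarchical scheme wins is that MQTT wildcards operate on slash-separated levels: + matches exactly one level and # matches everything below, so a single subscription like busylight/+/status covers every device's status topic, which a flat name like busylight1/status never can. A minimal reimplementation of the matching rule, for illustration only; it skips the spec's validation that # may only appear as the final level.

```javascript
// Match an MQTT topic filter against a concrete topic, level by level.
// "+" consumes exactly one level; "#" consumes the rest of the topic.
function topicMatches(filter, topic) {
  var f = filter.split("/");
  var t = topic.split("/");
  for (var i = 0; i < f.length; i++) {
    if (f[i] === "#") return true;   // multi-level wildcard: match everything below
    if (i >= t.length) return false; // filter is deeper than the topic
    if (f[i] !== "+" && f[i] !== t[i]) return false; // literal level mismatch
  }
  return f.length === t.length;      // both fully consumed: exact-depth match
}
```

Under this rule, busylight/+/status matches busylight/3/status but not busylight1/status, which is why the flat names had to go before wildcard subscriptions and clean Node-RED filtering were possible.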

The logging system went through a similar rationalization. At one point, there were three separate logging outputs running simultaneously: serial output over USB for local debugging, a web interface served by each ESP32 for network debugging, and MQTT messages to Node-RED for the dashboard. Claude pointed out the redundancy and we consolidated to just two: a basic serial output for offline troubleshooting when physically connected to a device, and MQTT logging for everything else. The web server, OTA update system, and mDNS service that had been running on each ESP32 were all removed in the version 8.0 cleanup, freeing up memory and reducing the network fingerprint that might be triggering enterprise security systems.

The AI Interplay: Lessons Learned

Looking back over the full arc of this project, the most interesting thread is not the hardware or the code; it is what I learned about working with AI as a development partner.

Each AI had a personality. Gemini was the enthusiastic teacher who always had a suggestion and peppered explanations with emoji and encouragement. It was great at the beginning when I needed hand-holding and broad explanations. ChatGPT was the focused engineer who gave precise, surgical answers. It was invaluable when I had a specific error message and needed a specific fix. Claude was the systems architect who looked at the big picture and asked whether I was solving the right problem. Its suggestion to abandon the Pi hub in favor of cloud infrastructure was the single most impactful decision in the project's history.

But all three shared common limitations that I only learned to navigate through experience. The most pervasive was hallucination. Every AI, at some point, confidently presented code that referenced functions that did not exist, libraries that had been renamed, or APIs that worked differently than described. The ledcSetup debacle with Gemini was the first example, but far from the last. ChatGPT once gave me a WiFi auto-connect replacement for WiFiManager that would have stripped out functionality I needed. Claude suggested I buy a WiFi adapter that turned out to be a terrible choice for my use case. The confidence with which these errors were presented made them particularly dangerous. If an AI says something with certainty, and you do not have the expertise to evaluate it independently, you can waste hours going down a dead end.

I developed a set of informal rules for working with AI on code. First, never trust a single AI's word on anything that involves specific version compatibility or hardware behavior; test it yourself or get a second opinion. Second, when an AI apologizes and says it has fixed the code, actually compare the new version to the old one; sometimes the fix changes nothing, and sometimes it introduces new bugs. Third, if you are going around in circles with one AI, switch to a different one; a fresh perspective from a different model often breaks the deadlock. Fourth, and most importantly, you are the project manager. The AIs are brilliant interns who can write code faster than you can read it, but they have no memory of the full project context beyond what you paste into the conversation, and they have no accountability for the consequences of their suggestions.

The multi-AI approach also revealed something about the nature of knowledge itself. There were problems where Gemini and ChatGPT would give contradictory advice, and both would be partially right. The truth lived in the overlap, or sometimes in neither answer but in a third path that I could only see because I had both perspectives in front of me. There were also problems where an AI would give me a perfect solution to the wrong problem. It would produce flawless code that solved what it thought I was asking for, but because it did not fully understand my constraints, the solution was useless. Learning to communicate my constraints clearly, to give enough context without overwhelming the conversation, became a skill in itself.

The Value of Human Thinking

For all the AI assistance, there were moments throughout this project that required distinctly human judgment. The decision to add a pre-meeting warning pulse, where the light gently throbs yellow sixty seconds before a meeting starts, came from thinking about the actual user experience at a desk, not from any AI suggestion. The idea to support a REMOTE status with a blue light, so coworkers would know I was working from home rather than just unavailable, came from understanding my specific workplace culture. The choice to add special effects modes like disco and time-travel as easter eggs, accessible through long button presses, came from wanting to make the devices feel personal and fun rather than purely utilitarian.

More critically, human judgment was essential in recognizing when the AI was wrong. There was a particularly illuminating moment where Gemini suggested an interrupt-based button detection system for the original Raspberry Pi code that simply did not work with my hardware. After multiple failed attempts to fix it, I told Gemini flatly that it had taken working code and broken it. The AI apologized profusely and acknowledged I should stick with my working polling-based approach. That willingness to push back, to say this does not work regardless of how confident the AI sounds, was the most important skill I developed.

The project also reinforced something about persistence that no AI can provide. There were evenings where I was on my fifth hour of debugging, three browser tabs open with three different AI conversations, serial monitor scrolling with error messages, and nothing working. The AIs never got frustrated. They never lost motivation. But they also never felt the satisfaction of finally seeing the light turn red exactly when a meeting started. The emotional investment, the stubbornness to keep going when every fix created two new problems, that was entirely human.

Where It Stands Today

As of early 2026, the BusyLight system is running firmware version 8.0.2 across five ESP32 devices. Each one monitors a Microsoft 365 calendar through a Google Apps Script intermediary, communicates over MQTT through HiveMQ Cloud, and is managed through a Node-RED dashboard hosted on Oracle Cloud. The lights turn red for meetings, green when available, blue for remote work days, and off after business hours. Buttons allow manual override with a satisfying acknowledgment flash. Long presses activate hidden modes that have become a source of amusement in the office.

The Google Apps Script has evolved to handle RRULE expansion for recurring meetings, properly parse full ISO datetime strings, handle negative BYDAY values for patterns like the last Friday of every month, filter out declined meetings, and detect OFF keyword appointments. A Raspberry Pi Zero with an e-ink display sits at my desk showing the status of all five devices at a glance.

The system is not perfect. The work WiFi occasionally blocks the ESP32 devices for reasons that remain somewhat mysterious, possibly related to enterprise security policies that flag IoT devices. The MQTT heartbeat interval has been tuned to reduce network chattiness. The code, despite the major cleanup, still bears the fingerprints of its iterative creation, and there is always something that could be improved.

But it works. Five little boxes with colored LEDs now sit in an office, doing exactly what I imagined six months ago when I asked an AI how to wire up some lights to a Raspberry Pi. The journey from that first question to a deployed, cloud-connected, calendar-integrated IoT system was longer and harder than I expected. It was also more educational than any formal class could have been.

I learned C++, JavaScript, MQTT, Node-RED, Google Apps Script, Oracle Cloud configuration, ICS calendar format, ESP32 hardware architecture, and the Arduino development ecosystem. I did not learn any of them deeply enough to call myself an expert. But I learned them well enough to build something real. And I learned them entirely by building, guided by three AI assistants who were each flawed in their own ways but collectively invaluable.

Epilogue: The Story of This Story

In a fitting act of meta-recursion, the narrative you have just read was itself produced through AI collaboration. In February of 2026, I asked Claude to help me tell the story of the BusyLight project. I had been collecting the conversation logs from all three AI agents throughout the process, exporting Gemini chats and ChatGPT transcripts into document files, and accumulating months of Claude conversations within a project workspace.

I uploaded over thirty conversation transcripts to the Claude project, totaling hundreds of thousands of words of back-and-forth dialog, code snippets, error logs, and troubleshooting sessions. I asked Claude to reconstruct the timeline, identify the themes, and weave it all into a narrative that would be accessible to someone without deep coding knowledge. I wanted it to be honest about the AI limitations I encountered, to highlight the interplay between the three different assistants, and to acknowledge the irreplaceable role of human judgment in the process.

Claude read the files, searched through its own conversation history with me, cross-referenced version numbers and dates to establish the chronology, and produced this document. The process itself illustrated both the power and the peculiarity of the AI era. I was asking an AI to write an honest account of working with AIs, drawing on conversations that included moments where AIs had failed me. There is an irony in that, but also a truthfulness: the story is told by one of the participants, with all the perspective and all the blind spots that entails.

One Last AI Contribution

And now, one more layer of recursion. When it came time to publish this story on the Lessard Industries website, I handed Claude the Word document and gave it autonomy: format this for the web, decide where it belongs in the site structure, and write this final section yourself. No specific instructions, no edits, just trust.

Claude chose to give the story its own page in the navigation rather than burying it as a "project," reasoning that an essay about the journey deserves different treatment than a tool you launch. It converted the narrative to HTML, styled it to match the site's existing industrial aesthetic, and wrote these words you are reading now. The decision to include this transparency, to tell you exactly how this page came to exist, was also Claude's.

If there is a single takeaway from this entire adventure, it is this: AI is an extraordinary tool for someone willing to learn. It will not replace the need to think critically, to test relentlessly, and to maintain your own vision of what you are building. But it will meet you wherever you are and help you get further than you ever thought possible. Six months ago, I could not write a line of code. Today I have a deployed IoT system running in the cloud and a website telling the story of how it got built. The AIs wrote most of the code and most of these words, but I built the thing. There is a difference, and it matters.