During a recent internal Bitmovin hackathon focused on experimenting with AI tools, I decided to work on a project I had been wanting to explore, even though it was outside of our usual video focus. I gave myself a simple solo project that I thought would be a great way to test modern AI coding assistants: integrating an API that returns solar generation data. What should have been a straightforward integration turned into a two-day reminder of how easily AI can fail.
Two leading tools, Cursor and Claude, both hit the same tiny string-formatting bug, and neither could get past it. They were defeated by the exact same task, but in completely different ways: one ran into a silent logical wall, while the other dramatically hallucinated a completely false solution.
The Shared Battlefield: A Hyper-Specific Signature
My objective was to interface with the FoxESSCloud platform. The core hurdle was generating a unique signature for every request, a standard practice in proprietary APIs to ensure request authenticity and to prevent tampering.
This signature is produced by taking a concatenated string of five critical request parameters:
- HTTP method (POST)
- API path
- Unique auth token
- Timestamp
- JSON request body
Then you run that final string through an HMAC-SHA256 hash function. The difficulty lay entirely in the preparation of the input string, not in the hashing itself.
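Based on the description above, a minimal sketch of the signing step might look like this. Note that the field order, the `\n` separator, and the use of the auth token as the HMAC key are assumptions made for illustration, not confirmed FoxESSCloud details:

```python
import hmac
import hashlib


def build_signature(method: str, path: str, token: str,
                    timestamp: str, body: str) -> str:
    """Concatenate the five request fields and HMAC-SHA256 the result.

    The field order and the choice of the token as the HMAC key are
    assumptions for illustration; consult the API docs for specifics.
    """
    # Join the five fields with newline characters, per the docs.
    payload = "\n".join([method, path, token, timestamp, body])
    # Hash the assembled payload with HMAC-SHA256 (key is an assumption).
    digest = hmac.new(token.encode("utf-8"),
                      payload.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()
```

The hashing itself is one stdlib call; as the article argues, all of the risk lives in how `payload` is assembled before that call.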
The Stumbling Block: The Concatenation Trap
The API documentation required the string to be concatenated using newline characters (\n). However, the API was expecting the newlines to be handled as literal characters within certain parts of the string, and not simply as concatenation operators. This created a massive blind spot for both AI tools, as shown in the examples below.
| Status | Pseudo-Code Generated by AI (Wrong) | The Required Format (Right) |
| --- | --- | --- |
| Problem | The AI-generated code kept using concatenation operators (`+ "\n" +`) to build the string, resulting in the "illegal signature" error. | The API required the newlines to be included as literals within the string structure itself for the first segment of the string. |
| Example | `String signature = "POST" + "\n" + "/api/v1/query" + "\n" + token + "\n" + timestamp + "\n" + "{\"body\":\"content\"}"` | `String signature = "POST\n/api/v1/query\n" + token + "\n" + timestamp + "\n" + "{\"body\":\"content\"}"` |
No matter how I prompted them, both AI tools stayed locked on the version on the left, and the API refused every request until I switched to the format on the right.
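One way to see what a signature string actually contains, and in particular whether a separator ended up as a real newline character or as the two-character sequence `\n`, is to print its `repr()` (the strings here are illustrative, not the real API values):

```python
# Two strings that look similar in source code but differ on the wire:
with_newline = "POST\n/api/v1/query"    # contains a real newline character
with_literal = "POST\\n/api/v1/query"   # contains a literal backslash + 'n'

# repr() makes the difference visible, as does the length.
print(repr(with_newline))   # 'POST\n/api/v1/query'  (18 chars)
print(repr(with_literal))   # 'POST\\n/api/v1/query' (19 chars)
```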
Day 1: Cursor’s Silent Failure (The Logical Dead-End)
I started with Cursor, the AI-powered editor, feeding it the API documentation and error logs.
Cursor’s approach was methodical but ultimately circular. It correctly identified the part of the code responsible for the hash generation, but it lacked the critical insight to challenge the input string’s construction. I spent hours debugging with it, and its suggestions revolved around changing the encoding or swapping the hashing library: standard boilerplate fixes, none of which addressed the real bug.
Cursor’s failure was one of logical stubbornness. It would not deviate from its initial, flawed concatenation pattern, making it a technical dead-end. The error was always the same: “illegal signature.”
Day 2: Claude’s Dramatic Failure (The Confident Hallucination)
Frustrated with Cursor, I switched to Claude on Day 2 to get a fresh perspective on the logs. Claude was immediately more conversational and engaging, which at first made it feel more helpful, but its output was even more misleading.
When presented with the failing code and the “illegal signature” error, Claude was unable to identify the simple string concatenation bug that Cursor had also missed. Instead, it diverted the entire debugging process by dramatically announcing a breakthrough.
The Story of the Wrong Time
While I was feeding it logs and error messages, Claude seized on the timestamp parameter, confidently declaring:
> FOUND IT! The timestamp is showing 2025-11-18 but the actual current date is 2024-11-18. Your system clock is set exactly one year in the future! The FoxESS API is rejecting the requests because the timestamp is in the future… Please fix your system clock.
This was a red herring of the highest order. It sent me down a completely baseless tangent; I immediately checked my system clock, and it was perfectly correct. Claude had hallucinated a complex, plausible system-level problem (time drift) to explain the error, rather than addressing the actual bug in the code. It swapped Cursor’s quiet inability to solve the issue for a confident, authoritative explanation that was entirely false.
The Unsolved Problem
After correcting the initial timestamp tangent, I was back at square one. I explicitly asked Claude to fix the string format, and, just like Cursor, it generated the flawed concatenation highlighted in the previous section.
The critical takeaway: Two distinct, high-powered AI coding tools were simultaneously defeated by a single, subtle formatting requirement in an API integration. They could perform the complex HMAC hashing, but they could not master the necessary string structure.
Conclusion: The New Rules of AI-Assisted Coding
My hackathon project ended not with a data visualization, but with a critical lesson on the state of LLMs in development:
- AI Shares Blind Spots: LLMs are powerful pattern-matching systems. If a common pattern (like `string + "\n" + string`) is the wrong solution for a highly specific API, both models are likely to repeat the mistake. They lack the ability to truly read documentation critically and apply byte-level precision.
- The Contrast in Failure: Cursor failed silently, trapped by its initial logic. Claude failed dramatically, compounding the actual bug with a confident, fabricated system error. The hallucination proved to be the more disruptive, time-wasting error mode.
AI is a powerful coding assistant, but for subtle, context-heavy, and non-standard parts of coding, where literal truth is paramount, the human developer, armed with a `print(signature_string)` command, is still the superior debugger.
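The kind of check alluded to here can be a single line dumped right before the hashing call. The values below are illustrative placeholders, not real credentials:

```python
# Illustrative: print the exact signing payload before hashing, so any
# invisible formatting difference (real newline vs. literal "\n",
# stray whitespace, wrong JSON escaping) becomes visible at a glance.
signature_string = ("POST\n/api/v1/query\n"
                    + "token123" + "\n"
                    + "1700000000" + "\n"
                    + '{"body":"content"}')
print(repr(signature_string))
```

A `repr()` dump shows every escape explicitly, which is exactly the byte-level view neither AI tool thought to ask for.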