Within just a single prompt, I was able to get a functional logo-making demo, exporting options and all: https://gist.github.com/jddunn/48bc03f3a9f85ffd8ccf90c801f6cf93.
This excerpt shows the LLM “generating” the correct links for fonts (as well as other dependencies, like https://cdnjs.cloudflare.com/ajax/libs/gif.js/0.2.0/gif.worker.js on line 869), the start of the inline CSS styles the logo creator applies via UI selection, and part of the exporting logic. Even the latest SHA hash of a linked CDN library is intact.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Logo Generator</title>
<!-- Extended Google Fonts API -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Orbitron:wght@400;500;700;900&family=Audiowide&family=Bungee+Shade&family=Bungee&family=Bungee+Outline&family=Bungee+Hairline&family=Chakra+Petch:wght@700&family=Exo+2:wght@800&family=Megrim&family=Press+Start+2P&family=Rubik+Mono+One&family=Russo+One&family=Syne+Mono&family=VT323&family=Wallpoet&family=Faster+One&family=Teko:wght@700&family=Black+Ops+One&family=Bai+Jamjuree:wght@700&family=Righteous&family=Bangers&family=Raleway+Dots&family=Monoton&family=Syncopate:wght@700&family=Lexend+Mega:wght@800&family=Michroma&family=Iceland&family=ZCOOL+QingKe+HuangYou&family=Zen+Tokyo+Zoo&family=Major+Mono+Display&family=Nova+Square&family=Kelly+Slab&family=Graduate&family=Unica+One&family=Aldrich&family=Share+Tech+Mono&family=Silkscreen&family=Rajdhani:wght@700&family=Jura:wght@700&family=Goldman&family=Tourney:wght@700&family=Saira+Stencil+One&family=Syncopate&family=Fira+Code:wght@700&family=DotGothic16&display=swap" rel="stylesheet">
<style>
:root {
--primary-gradient: linear-gradient(
45deg,
#FF1493, /* Deep Pink */
#FF69B4, /* Hot Pink */
#FF00FF, /* Magenta */
#FF4500, /* Orange Red */
#8A2BE2 /* Blue Violet */
);
--cyberpunk-gradient: linear-gradient(
45deg,
#00FFFF, /* Cyan */
#FF00FF, /* Magenta */
#FFFF00 /* Yellow */
);
--sunset-gradient: linear-gradient(
45deg,
#FF7E5F, /* Coral */
#FEB47B, /* Peach */
#FF9966 /* Orange */
);
--ocean-gradient: linear-gradient(
45deg,
#2E3192, /* Deep Blue */
#1BFFFF /* Light Cyan */
);
--forest-gradient: linear-gradient(
45deg,
#134E5E, /* Deep Teal */
#71B280 /* Light Green */
);
--rainbow-gradient: linear-gradient(
45deg,
#FF0000, /* Red */
#FF7F00, /* Orange */
#FFFF00, /* Yellow */
#00FF00, /* Green */
#0000FF, /* Blue */
#4B0082, /* Indigo */
#9400D3 /* Violet */
);
}
..<body>
<div class="container">
<header>
<h1>Logo Generator</h1>
</header>
<div class="controls-container">
<div class="control-group">
<label for="logoText">Logo Text</label>
<input type="text" id="logoText" value="MagicLogger" placeholder="Enter logo text">
</div>
<div class="control-group">
<label for="fontFamily">Font Family <span id="fontPreview" class="font-preview">Aa</span></label>
<select id="fontFamily">
<optgroup label="Popular Tech Fonts">
<option value="'Orbitron', sans-serif">Orbitron</option>
<option value="'Audiowide', cursive">Audiowide</option>
<option value="'Black Ops One', cursive">Black Ops One</option>
<option value="'Russo One', sans-serif">Russo One</option>
<option value="'Teko', sans-serif">Teko</option>
<option value="'Rajdhani', sans-serif">Rajdhani</option>
<option value="'Chakra Petch', sans-serif">Chakra Petch</option>
<option value="'Michroma', sans-serif">Michroma</option>
</optgroup>
<optgroup label="Futuristic">
<option value="'Exo 2', sans-serif">Exo 2</option>
<option value="'Jura', sans-serif">Jura</option>
<option value="'Bai Jamjuree', sans-serif">Bai Jamjuree</option>
<option value="'Aldrich', sans-serif">Aldrich</option>
<option value="'Unica One', cursive">Unica One</option>
<option value="'Goldman', cursive">Goldman</option>
<option value="'Nova Square', cursive">Nova Square</option>
</optgroup>
<optgroup label="Decorative & Display">
..
<script>
..
// Load required libraries
function loadExternalLibraries() {
// Load dom-to-image for PNG export
var domToImageScript = document.createElement('script');
domToImageScript.src = 'https://cdnjs.cloudflare.com/ajax/libs/dom-to-image/2.6.0/dom-to-image.min.js';
domToImageScript.onload = function() {
console.log('dom-to-image library loaded');
exportPngBtn.disabled = false;
};
domToImageScript.onerror = function() {
console.error('Failed to load dom-to-image library');
alert('Error loading PNG export library');
};
document.head.appendChild(domToImageScript);
// Load gif.js for GIF export
var gifScript = document.createElement('script');
gifScript.src = 'https://cdnjs.cloudflare.com/ajax/libs/gif.js/0.2.0/gif.js';
gifScript.onload = function() {
console.log('gif.js library loaded');
exportGifBtn.disabled = false;
};
gifScript.onerror = function() {
console.error('Failed to load gif.js library');
alert('Error loading GIF export library');
};
document.head.appendChild(gifScript);
}
// Export as PNG
exportPngBtn.addEventListener('click', function() {
// Show loading indicator
loadingIndicator.style.display = 'block';
// Temporarily pause animation
const originalAnimationState = logoElement.style.animationPlayState;
logoElement.style.animationPlayState = 'paused';
// Determine what to capture based on background type
const captureElement = (backgroundType.value !== 'transparent') ?
previewContainer : logoElement;
// Use dom-to-image for PNG export
domtoimage.toPng(captureElement, {
bgcolor: null,
height: captureElement.offsetHeight,
width: captureElement.offsetWidth,
style: {
margin: '0',
padding: backgroundType.value !== 'transparent' ? '40px' : '20px'
}
})
.then(function(dataUrl) {
// Restore animation
logoElement.style.animationPlayState = originalAnimationState;
// Create download link
const link = document.createElement('a');
link.download = logoText.value.replace(/\s+/g, '-').toLowerCase() + '-logo.png';
link.href = dataUrl;
link.click();
// Hide loading indicator
loadingIndicator.style.display = 'none';
})
.catch(function(error) {
console.error('Error exporting PNG:', error);
logoElement.style.animationPlayState = originalAnimationState;
loadingIndicator.style.display = 'none';
alert('Failed to export PNG. Please try again.');
});
});
..
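The excerpt above loads gif.js but is cut off before the GIF export itself, which is where the gif.worker.js dependency mentioned earlier comes in. Below is a rough sketch (not taken from the gist) of how that worker URL is typically handed to gif.js; the function name, frame count, and timing are assumptions, and the generated script’s actual export logic may differ.
// Sketch only -- not from the gist. Shows how the gif.worker.js URL is usually
// passed to gif.js; frame count, delay, and names here are assumptions.
function exportLogoAsGif(logoElement, fileName) {
const gif = new GIF({
workers: 2,
quality: 10,
// gif.js spawns Web Workers from this script -- the dependency linked earlier
workerScript: 'https://cdnjs.cloudflare.com/ajax/libs/gif.js/0.2.0/gif.worker.js',
width: logoElement.offsetWidth,
height: logoElement.offsetHeight
});
const frameCount = 10; // how many animation frames to capture (arbitrary)
const frameDelay = 100; // per-frame delay encoded into the GIF, in ms
// Capture frames sequentially with dom-to-image, adding each one to the encoder
let chain = Promise.resolve();
for (let i = 0; i < frameCount; i++) {
chain = chain
.then(() => domtoimage.toPng(logoElement))
.then(dataUrl => new Promise(resolve => {
const img = new Image();
img.onload = () => {
gif.addFrame(img, { delay: frameDelay, copy: true });
resolve();
};
img.src = dataUrl;
}));
}
chain.then(() => {
gif.on('finished', blob => {
// Download the encoded GIF, mirroring the PNG export flow above
const link = document.createElement('a');
link.download = fileName + '.gif';
link.href = URL.createObjectURL(blob);
link.click();
});
gif.render();
});
}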
The full ~900 LOC working script was created with Aider and the GPT-4o model. The plan was originally to keep using Aider, but its latest versions had worse functionality and accuracy than interacting with the same models in the web UI, so I switched to just using web UIs for prompts. $20 monthly plans, no Extended Thinking or Research mode features. But even so..
Consistency of use is an issue with all LLMs (often corresponding directly with alignment), whether we choose to interact with them through an app, a website, an API, or a third-party agent.
Taking Aider’s code (from the gist) and sending it to Sonnet 3.7 turned a 2-hour project into a 2-day project, and then into a 10-day project.
Let’s see how far Claude can take the original code we have and enhance it.
Claude says that if you type “continue”, it’ll work. Will it? (Hint: it often didn’t with OpenAI’s GPT-4o models, but Anthropic’s UI is king right now.)
.. we continue..
Asking Claude (Sonnet 3.7) to expand and improve, we were left with almost 2x the LOC. Brilliant. Except it doesn’t run, because it isn’t finished, so we can’t use it. And despite what Claude says, we can’t continue with the prompt (“continue”) or variations of it.
Claude simply loops, rewriting the beginning of the script.
We know Claude can handle a context window of 100–200k tokens, but that seems to apply only in Extended Mode. So what does this “continue” button even do? And what is this “Extended Mode”?
Am I forced into that since the “continue” prompt doesn’t work? It’s surely more expensive (just call the button Expensive Mode). Is it summarizing my conversation? Is it using Claude again to summarize my conversation (ahh)? Is it aggregating the last 10 or so messages, or however many it takes to reach a predetermined limit (and how does it determine that limit? Is it limiting my output window size, thus suppressing my ability to use Claude for pair programming?)?
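Purely to illustrate the kind of strategy being speculated about here, a naive “keep the most recent messages under a token budget” assembly might look like the sketch below. It is a guess at one possible mechanism, not a description of what Anthropic’s UI actually does.
// Illustrative only: one naive version of "aggregate the last N messages until a
// predetermined token limit is reached". Not a claim about Anthropic's implementation.
function buildContext(messages, tokenBudget, countTokens) {
const kept = [];
let used = 0;
// Walk backwards from the newest message, keeping whole messages until the budget runs out
for (let i = messages.length - 1; i >= 0; i--) {
const cost = countTokens(messages[i].content);
if (used + cost > tokenBudget) break;
kept.unshift(messages[i]);
used += cost;
}
return kept;
}
// Example usage with a crude ~4-characters-per-token estimate and a 100k-token budget
const approxTokens = text => Math.ceil(text.length / 4);
// const context = buildContext(conversation, 100000, approxTokens);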
Outputs from LLMs are typically capped at 8,192 tokens, a standard (and arbitrary) limit that the respective LLM providers can extend, and oftentimes do. Context windows are the same kind of hardcoded limit.
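That cap shows up as an explicit, per-request parameter. As a minimal sketch against Anthropic’s Messages API (the model ID and budget here are just example values), the caller has to declare the output ceiling up front:
// Minimal sketch: the output cap is a value the caller (or the provider's own UI)
// sets per request. Model ID and max_tokens below are example values.
const response = await fetch('https://api.anthropic.com/v1/messages', {
method: 'POST',
headers: {
'x-api-key': process.env.ANTHROPIC_API_KEY, // assumes a Node 18+ environment
'anthropic-version': '2023-06-01',
'content-type': 'application/json'
},
body: JSON.stringify({
model: 'claude-3-7-sonnet-20250219',
max_tokens: 8192, // the output ceiling discussed above
messages: [{ role: 'user', content: 'Continue the logo generator script.' }]
})
});
const data = await response.json();
Raising that single number is entirely in the provider’s hands, which is the point.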
If you’re asking why so many context windows are (supposedly) raised to six-figure limits while the output limit stays consistently capped at 8,192, you’re sparking discussions that are, in some ways, more interesting than existential, singularity-related thought experiments.