Tim Paul

AI-assisted poster and sticker design

30 Nov 2025

This is the first in what I hope will be a series of posts that explore different ways we could use AI to help us do design and user research in the public sector.

This first experiment, using AI to help design some posters and stickers, is a nice, low-stakes opportunity to try something out. If it doesn't work, well, no big deal.

The opportunity presented itself recently in the Incubator for AI, where I work. We'd just developed a set of 5 values to help guide our work and reflect our culture.

We all agreed it would be nice to create some ways to remind ourselves of them - posters, stickers, animated GIFs, that kind of thing.

Corporate values can be pretty cringe, but the team who led the work had made the process really collaborative, and so the results were good.

Here they are:

  1. Experiment, Learn, Repeat: We build with bold hypotheses, embrace failure and adapt as we go.
  2. Challenge Ideas, not People: We challenge ideas, including our own, with kindness and respect, to sharpen our thinking and build better products.
  3. Build for Impact: We obsess over finding applications that have the potential to unlock the greatest public value.
  4. Default to Open: We practice radical transparency, to accelerate progress and adoption.
  5. Set the Standard: We hustle with heart and deliver work that sets the standard high for AI in the public sector.

A few of us had done some initial sketching and ideation on paper and in Figma. But the truth is we were all too busy to dedicate enough time to see the work through.

I'd gotten as far as drafting the following poster ideas:

I wanted anything we produced to be able to sit alongside other posters produced over the years by the government design community.

But as you can see, my graphic design skills are pretty middling, especially when compared to some of the seriously talented designers past and present in GDS.

The release of Google Gemini 3 prompted me to consider whether AI could help me push things forwards. What did I have to lose?

Phase 1: AI ideation #

1st iteration #

I initially fed those 5 poster images into Gemini and asked it to iterate them in the 'Swiss style': clean lines, bold typography, negative space, strong grids and so on.

Here's what it produced:

Actually pretty good. I would say that it broadly met the brief.

The backgrounds were bolder, and the all-caps Helvetica and visible grid were a (none too subtle) nod to the Swiss style.

It opened my eyes to other design options - like the fact that the paragraphs could have slightly different layouts but still feel consistent.

2nd iteration #

I was going to stop there, but decided to see what it did if I gave it the same prompt but didn't provide any reference images.

Here's what it did:

Damn. These are actually better than my initial ideas.

Two things really impressed me:

For example:

3rd iteration #

The results so far had been impressive, but Gemini had rendered each idea in a raster image format like PNG. I wondered, could it generate vector images?

I tried the prompt again, but asked it to generate the results as an SVG object inside an HTML file, which it duly did:

Gemini had no trouble coding SVG files, but the actual images themselves were much less accomplished.
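For reference, the kind of output I was asking for looks roughly like this. This is a minimal sketch - the layout, colours and text are illustrative assumptions, not Gemini's actual output:

```html
<!-- Minimal sketch of an SVG poster embedded in an HTML file.
     Colours, text and shapes are illustrative, not the real output. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Default to Open</title>
</head>
<body>
  <svg viewBox="0 0 400 566" width="400" xmlns="http://www.w3.org/2000/svg">
    <!-- Bold background block -->
    <rect width="400" height="566" fill="#d4351c"/>
    <!-- Simple geometric motif -->
    <circle cx="200" cy="180" r="90" fill="#ffffff"/>
    <!-- All-caps headline -->
    <text x="40" y="420" fill="#ffffff"
          font-family="Helvetica, Arial, sans-serif"
          font-size="32" font-weight="bold">DEFAULT TO OPEN</text>
  </svg>
</body>
</html>
```

Because everything is plain markup, the whole design stays editable as text, which is what makes the animation step later in this post possible.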

I did however really like the visual concept behind 'Set the Standard'. Less needlessly phallic than the skyscraper metaphor.

Phase 2: Human synthesis and implementation #

I had enough ideas and material now to jump back into Figma and create my own interpretations, synthesising the best of the different ideas.

Here's where I ended up:

I did a fair amount of detailed implementation work in this phase, such as establishing a visible 12-column grid to align objects to.

Once I'd finished I used the poster designs as the basis for these digital and print friendly sticker designs:

Phase 3: Will it animate? #

At the beginning of the project we'd discussed the idea of creating animated versions of the images that people could use in Slack or in slide decks.

I wondered if Gemini could help here, so I exported the square sticker designs from Figma as SVGs and then opened them in a code editor.

I was then able to copy and paste the code for each SVG into Gemini and ask it to animate each one.

Initially my brief was very simple:

Here's what it produced:

Superficially this was really impressive to me: it was able to isolate the right parts of the SVG code and then apply CSS animations to them.
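To give a sense of what that involves: an element inside an inline SVG can be targeted by id and animated with ordinary CSS keyframes. This is a minimal, illustrative sketch, not the actual sticker code:

```html
<!-- Minimal sketch: animating one element of an inline SVG with CSS.
     The ids and shapes are illustrative, not taken from the real stickers. -->
<svg viewBox="0 0 200 200" width="200" xmlns="http://www.w3.org/2000/svg">
  <style>
    @keyframes spin {
      from { transform: rotate(0deg); }
      to   { transform: rotate(360deg); }
    }
    #motif {
      transform-origin: 100px 100px; /* rotate around the sticker's centre */
      animation: spin 4s linear infinite;
    }
  </style>
  <circle cx="100" cy="100" r="95" fill="#1d70b8"/>
  <rect id="motif" x="70" y="70" width="60" height="60" fill="#ffffff"/>
</svg>
```

Tweaking the duration or easing function in the browser inspector is quick, which keeps the iterative loop short.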

But I had a feeling that it could do much better if I gave it more specific directions.

So, I repeated the process, but this time I used the circular sticker designs, and gave detailed instructions for how to animate the elements of each one.

Here's what it produced:

In each case there was usually a bit of post-production required. Either it had forgotten to animate an element, or I needed to tweak the timings a little.

But CSS animations are relatively easy to understand, and you can play around with them in the inspector - the iterative loop is nice and short.

Reflections #

I think it worked #

I'm so used to seeing that default Gen AI visual aesthetic - maximalist gloss and high-production surface sheen - that I'd had my doubts that Gemini would be able to reproduce the clean lines and minimalism of the Swiss Style.

But of course, it could - I just had to ask it to.

It felt weird at first #

I'll admit - I felt pretty shameless initially. It takes some getting used to, delegating your work to a machine like this. At various points I got that vertigo feeling you get when you see something uncanny.

I did not feel alienated from the outputs #

I'd worried that I would feel no connection to the outputs, but despite the really significant contributions from Gemini, I still did.

I think this is because I was driving the overall process, making judgements about which ideas to use and doing detailed, manual post-production work.

I like this idea of AI as coach and sparring partner.

I think I learned stuff #

I also worried that this process would be infantilising, or result in skills atrophy and dependency on AI.

I still think that could happen, but if you go in looking for opportunities to learn, they are there.

Throughout the process Gemini was rationalising its decisions with detailed notes, many of which were genuinely interesting.

The process pushed me to do better than I would have on my own, and I've learned more about CSS animations than I was expecting to.

I'm still conflicted by the ethics #

I think about the unpaid labour from graphic designers that went into the training data for Gemini and other similar LLMs.

And I think of the graphic designers who might not find work because tools like this exist.

I don't think that this specific use case was particularly problematic, but only because it was low risk and low stakes.

I'd be keen to explore the processes described here, but using models that have only been trained on copyright free data, or where creators have been fairly compensated for their work.

Ultimately, I think we as individuals need to decide what we are comfortable with here.
