Exploiting the flaws of AI art to create movie posters for my recurring dreams
I have always found the text I see in my dreams to be interesting - it often looks like English, but isn’t. When I first started playing with Stable Diffusion, I noticed that the text it generates is very similar:
A local art gallery was accepting pieces for an exhibition of visual art based on dream imagery. My dreams often play out like movies. This gave me an idea: movie posters for three of my favorite types of dreams:
I wanted to create three movie posters, each with a unique style. The posters would be in portrait orientation, with imagery overlaid with credits and a title.
This presented a challenging problem: Stable Diffusion 1.5 struggles to generate accurate text and imagery together in a single image. If it had to produce the movie poster text and the imagery at the same time (i.e. with a single image generation prompt), the odds of obtaining a usable image would be quite low. To avoid this problem I took a compositing approach: generate the imagery and the text separately, then combine them.
I began by deciding what style of movie posters I wanted to create. Based on the differing themes, each dream was best represented by a different movie poster genre:
With these styles in mind, the compositions came naturally:
Next came the prompts to produce the desired imagery. I treated each prompt as the combination of two distinct portions: one describing the content, the other describing the visual style.
After considerable experimentation, I arrived at the following prompts:
black muscle car leaving a trail of fire, driving into the camera, front view, in an empty desert, dark black smoke in the sky, surrounded by flames, watercolor painting, action movie poster
asteroid hitting earth, explosion, debris, apocalypse, destruction, aerial view from above, oil painting, disaster movie poster
a boy basejumping without a parachute with arms stretched wide, wearing sneakers, flying above the surface of the earth, top down view of the back of his head and body, white altocumulus clouds above the surface of the earth visible underneath, bright day, oil painting, fantasy movie poster
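As a minimal sketch of how one of these content + style prompts can be fed to Stable Diffusion 1.5, here is a version using the diffusers library; the model id, image dimensions, and sampler settings below are illustrative assumptions rather than my exact tooling:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion 1.5 checkpoint (model id is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Each prompt is a content portion plus a style portion.
content = ("black muscle car leaving a trail of fire, driving into the camera, "
           "front view, in an empty desert, dark black smoke in the sky, "
           "surrounded by flames")
style = "watercolor painting, action movie poster"

image = pipe(
    f"{content}, {style}",
    height=768, width=512,          # portrait orientation for a poster
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("muscle_car_art.png")
```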
From the thousands of generated images I chose my favorites:
I was now ready to generate the text for the movie posters.
With the poster imagery generated, I shifted focus to generating images purely for their text content. This meant I could tailor my prompts to increase the odds that appropriate text (in style and content) was generated.
After some experimentation, I arrived at the following prompts:
grindhouse muscle car movie poster text credits written by directed by
classic asteroid destruction disaster movie poster with text credits at bottom
fly movie poster, cartoon, zany, wacky, large font
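Because usable text only shows up occasionally, it helps to sweep many seeds per prompt and keep the rare good results. A rough sketch of that loop, assuming the same pipeline setup as above (output paths and candidate counts are arbitrary):

```python
import torch
from pathlib import Path
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "grindhouse muscle car movie poster text credits written by directed by"
out_dir = Path("text_candidates")
out_dir.mkdir(exist_ok=True)

# Generate many candidates with fixed seeds so promising ones can be reproduced.
for seed in range(500):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, height=768, width=512, generator=generator).images[0]
    image.save(out_dir / f"grindhouse_{seed:04d}.png")
```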
From the thousands of generated images I chose the most appropriate movie titles and credits text:
Since I was printing these movie posters for display in an art gallery, I targeted the standard “mini sheet / insert” movie poster size of 11" x 17". Using the standard 300dpi printing resolution, this required the movie posters to have pixel dimensions of 3300px x 5100px. Stable Diffusion 1.5 is not capable of creating images at this resolution, so I needed to upscale the imagery before compositing.
After some experimentation, I found that using R-ESRGAN 4x+ to upscale images, and ESRGAN 4x+ to upscale text worked best.
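The pixel math, plus one possible command-line route to the R-ESRGAN 4x+ model, looks like this; the base render size, filenames, and the realesrgan-ncnn-vulkan call are assumptions, and a GUI upscaler works just as well:

```python
import subprocess

# 11" x 17" at 300 dpi.
target_w, target_h = 11 * 300, 17 * 300          # 3300 x 5100

# A 512x768 SD 1.5 render (assumed base size) needs roughly a 6.5x enlargement,
# e.g. a 4x ESRGAN pass followed by a final resize.
base_w, base_h = 512, 768
print(target_w / base_w, target_h / base_h)      # ~6.4, ~6.6

# One way to run the R-ESRGAN 4x+ model from the command line.
subprocess.run([
    "realesrgan-ncnn-vulkan",
    "-i", "muscle_car_art.png",
    "-o", "muscle_car_art_4x.png",
    "-n", "realesrgan-x4plus",
    "-s", "4",
], check=True)
```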
I could now combine these pieces into the final movie posters.
In Photoshop, I isolated the text from its background, refined the text outlines and colors, and played with various compositions. Once I was happy, I applied Gaussian noise to the entire image to compensate for any smoothing that occurred during upscaling. This also made the text and background imagery blend more naturally.
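For anyone without Photoshop, that final noise pass can be approximated with NumPy and Pillow; the filename and noise strength here are just placeholders:

```python
import numpy as np
from PIL import Image

# Rough stand-in for Photoshop's "Add Noise" on the finished composite: a small
# amount of Gaussian noise hides upscaling smoothness and helps the pasted text
# sit in the background imagery.
composite = np.asarray(Image.open("final_composite.png")).astype(np.float32)

rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=4.0, size=composite.shape)  # scale in 0-255 units

noisy = np.clip(composite + noise, 0, 255).astype(np.uint8)
Image.fromarray(noisy).save("final_composite_noisy.png")
```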
The final composites:
The posters were printed and framed:
Then mounted for display at the art gallery:
Those who liked it liked it a lot: