AI-Generated Art
In an earlier class, you saw how AI can generate textual descriptions of images. In this class, you will see the reverse, as you use AI to generate images based on textual descriptions.
Preparation
First, watch this short video: “DALL-E 2 Explained.” DALL-E is a family of image synthesis models. Also, read this description of the latest version of the model, DALL-E 3.
Next, read these articles about AI-generated art:
- From toy to tool: DALL-E 3 is a wake-up call for visual artists—and the rest of us (from November 2023)
- With Stable Diffusion, you may never believe what you see online again (from September 2022)
Finally, look at examples of AI-generated art. In some cases, artists combine output from an image synthesis model with traditional digital painting techniques. Note that some images may be NSFW.
Optional: In class, you will create art using an image synthesis model. It will be sufficient to use the web-based “Stable Diffusion Demo,” but you are welcome to use other options. For example, several cloud-based services offer image synthesis models, such as Adobe Firefly, Midjourney, DreamStudio, Imagine with Meta AI, and Replicate. Finally, if you have a PC with a powerful GPU, or a recent Mac or iPhone, you can run Stable Diffusion locally. If you want to try these options, please set them up before class, as configuration can be time-consuming.
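If you do decide to run Stable Diffusion locally, the sketch below shows one common approach using the Hugging Face diffusers library. This is only an illustration, not a required setup; the model checkpoint, prompt, and file name are placeholders you can change.

```python
# Minimal sketch: generate one image with Stable Diffusion via the
# Hugging Face diffusers library. Assumes `torch` and `diffusers` are
# installed and a CUDA GPU is available; the model ID below is one
# commonly used checkpoint, not the only option.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # on a recent Mac, try .to("mps") instead

# Generate a single image from a text prompt and save it to disk.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```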
Optional: Consider also reading and watching:
- How We Think About Copyright and AI Art
- AI wins state fair art contest, annoys humans
- AI Image Generation Tested - Revolutionary for Game Devs?
- AI image generation tech can now create life-wrecking deepfakes with ease
- Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model
In Class
Activity 1: AI Art Discussion
In class, we will start by discussing the readings.
Activity 2: Create AI-Generated Art
You can work either individually or with a partner.
Getting Started
To get started, load the “Stable Diffusion Demo,” write a text prompt, and click “Generate image.” After a few seconds, you should see a generated image.
Try a few different inputs, and download any that you like.
Note: If you repeat the same prompt, you will get different results. This is because the “random seed” for the images changes each time. If you fixed the random seed, you would always get the same images for a particular prompt.
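For illustration, here is a minimal sketch of how fixing the seed works if you run Stable Diffusion programmatically with the Hugging Face diffusers library (not required for this activity; the model ID, seed value, and prompt are hypothetical placeholders):

```python
# Minimal sketch of a fixed random seed with diffusers. Rerunning this
# script with the same seed and prompt reproduces the same image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A fixed seed (42 here) makes the sampling deterministic.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("an astronaut riding a horse", generator=generator).images[0]
image.save("astronaut_seed42.png")
```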
Creating “Good” AI-Generated Art
If your results are like mine, your first attempts might be interesting, but they probably won’t qualify as award-winning. Many artistic decisions are involved in creating compelling AI-generated art. For example:
- Choice of image synthesis model: Popular choices are Stable Diffusion, Midjourney, DALL-E 3 (which powers Bing’s Image Creator), and Adobe Firefly. Imagine with Meta AI is a recent free option based on Meta’s Emu model. Stable Diffusion has an open model, so people have created customized versions designed for specific artistic styles (e.g., to create Disney-styled images). Customized models can also generate images of particular subjects (see: DreamBooth).
- Prompt design: Image synthesis models use a text prompt as a starting point for image generation. Crafting a text prompt that yields “good” results is a process of trial and error, informed by knowledge of relevant keywords present in the training data (e.g., from photography, art history, and more). Negative prompts can also be used to remove undesired features (see the code sketch after this list).
- Guy Parsons’s “The DALL-E 2 Prompt Book” can help you develop an intuition for effective prompt design. The examples are especially helpful. Although the book is written for DALL-E 2, the techniques should also work with Stable Diffusion and other models.
- Advanced AI-based techniques:
- Image synthesis models can create images similar to an input image (e.g., “ControlNet” and “img2img”).
- “In-painting” can replace deleted areas of an image (i.e., painting over an area within the canvas).
- “Out-painting” can expand the borders of an image (i.e., painting outside the borders of the canvas).
- Additional edits using other tools: After generating an image, artists can make further edits using image editing software like Photoshop, or using specialized AI-based tools (e.g., ARC Face Restorer).
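To make two of the techniques above more concrete, here is a minimal sketch of a negative prompt combined with img2img, using the Hugging Face diffusers library. Again, this is an illustration under my own assumptions, not a required workflow; the model ID, file names, prompts, and parameter values are placeholders.

```python
# Minimal sketch: img2img with a negative prompt via diffusers.
# The pipeline generates a new image that follows the composition of an
# input image while suppressing features listed in the negative prompt.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Any rough sketch or photo can serve as the starting image.
init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

image = pipe(
    prompt="an oil painting of a castle on a cliff, dramatic lighting",
    negative_prompt="blurry, low quality, washed-out colors",
    image=init_image,
    strength=0.75,       # how far to move away from the input image
    guidance_scale=7.5,  # how closely to follow the prompt
).images[0]
image.save("castle_img2img.png")
```

Lower `strength` values stay closer to the input image; higher values give the model more freedom.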
Spend the rest of class working to create an example of AI-generated art you are proud to share with the class.
Submit
Add a slide to the Google Slides document linked from Canvas. Based on the template slide, include the following information:
- Your AI-generated art
- Your name(s)
- Information about how you generated the art:
- The image synthesis model (e.g., Stable Diffusion, Midjourney, DALL-E, etc.)
- The prompt you used
- Any additional steps you took to create the art
Learning Goals
Students will:
- Consider ethical issues associated with AI-generated art
- Practice using AI to create art
- Develop oral communication skills