With the proliferation of Large Language Models, we have new opportunities for creative collaboration. However, users often struggle to form accurate mental models of AI in co-creative contexts, which can hinder effective collaboration.
We aimed to address this challenge by designing a tool that could assist in creative processes and facilitate effective human-AI collaboration. The result was a tool we now use in IBM Innovation Studio's co-creation and design thinking workshops.
In July 2022, I began experimenting with GPT-3, prompting it much like participants in design thinking workshops. I discovered that GPT-3 could contribute creative artifacts useful for context-specific ideation. By conversing with AI and transferring its outputs onto sticky notes, we effectively engaged the models in co-creation.
However, initial experiments highlighted issues with inconsistent output quality in group settings. To address this, I developed a framework for prompting AI agents to actively and consistently participate in design thinking with human groups.
I first developed a series of low-fidelity proofs of concept which, based on user input, generated context-specific ideation artifacts to be used as sticky notes in workshops.
To test it, I ran a design thinking workshop with a group of designers from different companies, recording their expectations, misalignments, discoveries, and feedback.
Overall, we found that providing a platform where users input variable sub-prompts enabled quicker and broader ideation; however, users needed the freedom to explore ideas outside of a predefined framework.
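To make the "variable sub-prompt" idea concrete, here's a minimal sketch of how user inputs might be slotted into a fixed prompt template and the model's response split into sticky notes. The template wording, field names, and functions are all hypothetical illustrations, not the tool's actual prompts:

```python
# Hypothetical template: the fixed frame stays constant while users
# supply the variable sub-prompts (context, persona, exercise).
STICKY_NOTE_TEMPLATE = (
    "You are a participant in a design thinking workshop about {context}. "
    "Acting as {persona}, list {count} short ideas for the exercise: {exercise}. "
    "Return one idea per line, each suitable for a single sticky note."
)

def build_prompt(context: str, persona: str, exercise: str, count: int = 5) -> str:
    """Fill the user's variable sub-prompts into the fixed template."""
    return STICKY_NOTE_TEMPLATE.format(
        context=context, persona=persona, exercise=exercise, count=count
    )

def to_sticky_notes(model_output: str) -> list[str]:
    """Split a line-per-idea model response into individual sticky notes."""
    return [
        line.strip("- ").strip()
        for line in model_output.splitlines()
        if line.strip()
    ]
```

Constraining the output format ("one idea per line") is what makes the response easy to map onto discrete workshop artifacts.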
Based on these findings, the HCAI team and I developed and tested a broader, medium-fidelity collaborative whiteboard interface that generated actual sticky notes from user input:
Initial testing found that a simple "input context -> output ideas" feature was useful for live ideation. Neat! However, users wanted the option to use existing frameworks, the freedom to explore their own ideation through free-form input, and the ability to reference existing artifacts.
So for both of these interactions, we landed on the following strengths and weaknesses to guide our design:
Building on initial feedback, I designed a high-fidelity interface with new interaction patterns, clear identification of AI-generated content, a more streamlined interface, and example prompts to guide and inspire users.
Navigation and General UI:
Left Navigation: Added options for home, creating a new board, recently visited boards, favorites, and the user’s collection.
Bottom Interface: Introduced text input, a single sticky note button, and a design thinking template generator.
AI Chat Interface: Enhanced chat for direct AI engagement, allowing sticky note generation within the same conversation.
Enhanced AI Editor:
Enhanced Prompting Panel: Embedded color and text editing, a blank prompt input area, and a menu of predefined prompts.
[AI] Slug and Gradient: Clear identification of AI-generated sticky notes.
Starter Prompts: Prompts to quickly utilize AI in a way that expands creativity, such as suggesting prompts, summarizing, expanding, providing variations, and asking questions.
Design Thinking Template Generator:
Starting with Empathy Mapping (with plans to add more templates in the future), I wanted a clean way for users to quickly generate multiple sticky notes in response to a specific design thinking exercise. So I added a feature similar to my initial experiment from earlier, with some extras:
Generate an example: Implemented a button for users to generate an example empathy map, so they can quickly learn how to use the tool.
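A rough sketch of how a template generator like this might work under the hood: prompt the model for the four classic empathy map quadrants in a fixed format, then parse each line into a labeled sticky note. The prompt wording and function names are my own illustration, not the tool's implementation:

```python
# The four classic empathy map quadrants.
QUADRANTS = ["Says", "Thinks", "Does", "Feels"]

def empathy_map_prompt(persona: str, context: str) -> str:
    """Ask for all four quadrants in an easy-to-parse 'Section: entry' format."""
    sections = ", ".join(QUADRANTS)
    return (
        f"Create an empathy map for {persona} in the context of {context}. "
        f"For each section ({sections}), list three short entries, "
        "formatted as 'Section: entry' with one entry per line."
    )

def parse_empathy_map(model_output: str) -> dict[str, list[str]]:
    """Group 'Section: entry' lines into quadrant -> sticky notes."""
    result: dict[str, list[str]] = {q: [] for q in QUADRANTS}
    for line in model_output.splitlines():
        if ":" not in line:
            continue
        section, _, entry = line.partition(":")
        section = section.strip().lstrip("- ").title()
        if section in result and entry.strip():
            result[section].append(entry.strip())
    return result
```

Because each parsed entry carries its quadrant label, the canvas can place the generated sticky notes directly into the right region of the template.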
Multi-Sticky Prompts:
For example, Merge, which combines ideas into a single cohesive concept or proposal, or Cluster Ideas, which groups sticky notes into thematic clusters based on the content. (Plus it's a very fun thing to say.)
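Multi-sticky prompts like these can be thought of as small prompt builders that take a selection of notes as input. Here's a minimal sketch; the wording is illustrative, not the tool's actual prompts:

```python
def merge_prompt(notes: list[str]) -> str:
    """Hypothetical Merge: combine selected notes into one cohesive concept."""
    joined = "\n".join(f"- {n}" for n in notes)
    return (
        "Combine the following sticky notes into a single cohesive "
        f"concept or proposal:\n{joined}"
    )

def cluster_prompt(notes: list[str]) -> str:
    """Hypothetical Cluster Ideas: group selected notes into thematic clusters."""
    joined = "\n".join(f"- {n}" for n in notes)
    return (
        "Group the following sticky notes into thematic clusters, "
        f"and name each cluster:\n{joined}"
    )
```

The pattern generalizes: any multi-note operation is just a different instruction wrapped around the same bulleted list of selected notes.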
Enhanced Canvas Agent Experience:
The previous UI required switching between a private sticky note interface and a chat interface. Now, in addition to visual consistency and a streamlined chat experience, the Canvas Agent generates sticky notes entirely within the same conversation.
For comparison:
Implementing the latest version, showcasing it to clients, and using it in workshops have proven incredibly effective. We received positive client feedback, leading to increased business and a deeper understanding of AI systems. This project also contributed to higher-rated workshop experiences within our Innovation Studio and the creation of a standard framework adopted across IBM designers.
We also learned a few lessons that should transfer to future Human-AI Co-Creation design pursuits:
AI can be treated as an additional team member, adding divergent ideas that encourage deeper creative reflection.
AI’s unique perspective and generative variability can be used to challenge conventional thinking.
Structured templates and automations help users quickly build a mental model of the tool's capabilities and guide interaction, while free-form prompting helps them go even further.
Tools should enable users to question, disagree with, and debate AI-generated content.
Clear labeling of AI-generated content helps users distinguish human contributions from machine ones.
Humans remain the creative drivers in AI interactions.
While Generative AI can synthesize insights from its training data, it does not inherently understand human desires, needs, or goals. AI provides a perspective for humans to observe, reflect on, and use to make informed creative decisions. By implementing these enhanced co-creative interactions within a collaborative canvas environment, we've built a new kind of Meaning Machine interface that has proven useful for ideation, reflection, and action setting.
Contact
Thanks for stopping by.
If you're curious to learn more, or want to work on interesting things together, I'm always happy to talk: