AI-human interaction

As Artificial Intelligence (AI) continues to advance, it brings both exciting opportunities and new challenges for product design. Although designing for AI still requires adhering to human-centered design principles, additional considerations such as ethics, privacy, trust, and transparency must be taken into account.

This page is divided into two main sections: Guidelines and Framework.

Guidelines

Start with the user, not the technology

AI technology should be leveraged to enhance the user experience, rather than be the primary focus. Design with a deep understanding of the user's needs, goals, and pain points. If you aren't aligned with a user's need, you are building a system that does not solve a problem. Instead of asking "Can we use AI to _____?", ask yourself "How might we help users _____?".

Understand when to automate

Determine whether a task is a good fit for AI, or whether it's better done by a human. First, assess whether automation actually serves the user's need. Users may not want automation for high-stakes tasks where they'll be held responsible for the result, or for tasks they enjoy doing. Good candidates for automation are tasks that are tedious, error-prone, boring, and low stakes, where automating them frees up the user's time.

If a user benefits from automation, consider whether the problem could be addressed with pre-defined rules (if this, then that) before reaching for AI.

Understand the strengths and weaknesses of AI. AI is helpful for processing large amounts of information, pattern finding, prediction, classification, and recommendations. Given good training data, AI can be more accurate and faster than a human at completing these tasks. AI is less helpful for tasks requiring empathy, emotional intelligence, morality, common sense, predictability, contextual understanding, intuition, or creativity.
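
As a minimal sketch of the rules-first check, the example below tries pre-defined rules before calling an AI service. The rule table, paths, and the aiSuggest function are illustrative assumptions, not GitLab APIs.

```typescript
// Hypothetical example: prefer a pre-defined rule over an AI call when
// the mapping is already known.

type LabelSuggestion = { label: string; source: 'rule' | 'ai' };

const PATH_RULES: Array<[RegExp, string]> = [
  [/^doc\//, 'documentation'],
  [/^spec\//, 'testing'],
];

async function suggestLabel(
  filePath: string,
  aiSuggest: (path: string) => Promise<string>,
): Promise<LabelSuggestion> {
  // "If this, then that": a pre-defined rule is cheaper, faster, and
  // fully predictable, so try it first.
  for (const [pattern, label] of PATH_RULES) {
    if (pattern.test(filePath)) return { label, source: 'rule' };
  }
  // Fall back to AI only when no rule covers the case.
  return { label: await aiSuggest(filePath), source: 'ai' };
}
```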

Understand risk

Understand the risk of an AI-assisted feature by assessing the probability and impact of an incorrect recommendation (see the sketch after this list). In a high-stakes situation, even an unlikely error can have serious consequences. To mitigate risk in high-stakes situations:

  • Clarify the system's limitations and how much the user can trust its recommendations. For example, consider showing a detailed disclaimer such as “Content generated by AI should be seen as a starting point and verified before use. It may be incorrect, inappropriate, or diverge from your organization’s standards.” Or, if space is a concern, just “Verify before use.” See the related section, Set the right expectations.
  • Design for potential negative impact. For example, require users to explicitly opt in to a high-stakes AI-assisted feature.
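
One way to make this assessment concrete is sketched below; the threshold and names are illustrative assumptions, not a GitLab rule.

```typescript
// Illustrative sketch: treat risk as the combination of how likely a
// recommendation is to be wrong and how costly acting on a wrong one
// would be.

type Impact = 'low' | 'medium' | 'high';

interface FeatureRisk {
  errorProbability: number; // 0..1, estimated from evaluation data
  impact: Impact; // consequence of acting on a wrong recommendation
}

// Require explicit opt-in whenever a wrong recommendation would be
// both plausible and costly.
function requiresExplicitOptIn({ errorProbability, impact }: FeatureRisk): boolean {
  return impact === 'high' || (impact === 'medium' && errorProbability > 0.1);
}
```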

Communicate confidence

Users rely on the system to make decisions, but they should not trust it unconditionally. Communicating confidence lets users know how much scrutiny to apply to a recommendation.
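
A minimal sketch of mapping a score to user-facing wording, assuming the model exposes a numeric confidence score; the thresholds and copy are illustrative only.

```typescript
// Minimal sketch: translate a raw confidence score into a hint about
// how carefully the user should review the recommendation.

function confidenceHint(score: number): string {
  if (score >= 0.9) return 'High confidence';
  if (score >= 0.6) return 'Medium confidence: review before applying';
  return 'Low confidence: verify carefully before use';
}
```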

Be transparent

Establish trust by ensuring the user always knows when they are interacting with AI, and when content or recommendations come from AI. Such disclosures are often required by third-party AI services and may soon be required in the European Union (EU AI Act).

Name

To communicate the suite of AI capabilities and identify specific AI-assisted features, use the GitLab Duo name. It's an extension of our brand that acts as an “umbrella” for all AI capabilities across GitLab. For variations of the GitLab Duo name, such as feature or add-on names, see the technical writing word list.

  • Show the “GitLab Duo” name at least once per AI-assisted feature. The name can be shown before or after user interaction.
  • A call-to-action can optionally have the “GitLab Duo” name in its label, if reasonable. For example, “Ask GitLab Duo” or “Tell GitLab Duo what you're looking for…”

Disclaimer

  • Flag AI-generated content with the passive voice disclaimer <Verb> by AI. For example: “Generated by AI” or “Summarized by AI”.
  • Show the disclaimer only once per context, preferably under the AI-generated content, making it clear that it applies to all content within that context.

Icon

In the UI, use the tanuki-ai icon as the visual identifier for GitLab Duo.

  • Show only one icon per context. For example, use only one instance of the icon in the header of a list or table, and not multiple instances for each child item.
  • The icon is preferably shown before the user interacts with the AI-assisted feature, for example, in the button that triggers the action. However, the icon can be shown after user interaction if more appropriate.

Illustration

The tanuki-ai-md and tanuki-ai-sm illustrations promote AI-related features and visually associate them within the UI.

The illustration is a work in progress; its final version is still in development.

Color

In the UI, there is no specific color associated with AI or GitLab Duo. This differs from marketing, which has specific colors for the GitLab Duo visual identity. The color of the icon or actions in AI-assisted features follows the component-specific guidelines, such as button variants.

Set the right expectations

The interface should clearly communicate the AI's capabilities, limitations, and the scope of its decision-making authority. Users need to know a system's capabilities and limits to gauge how much trust to place in it. To help the user build a mental model of the system:

  • Clearly highlight if a feature is an Experiment or Beta.
  • Follow the disclaimer guidelines.
  • Use clear, simple language to explain what the system is doing and how it arrived at its recommendations.
  • Explain what data the system is trained on and what it's optimized for.
  • Tell the user how their data is used and processed.

Give the user control

The user should be able to decide whether to follow the AI's recommendations or not. There should be an easy way to undo system actions. Do not collect user data without asking the user's permission.
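
One possible shape for reversible actions is sketched below; the undo-stack approach and all names are illustrative assumptions, not a GitLab API.

```typescript
// Illustrative sketch: record an undo callback alongside every
// AI-applied action so system actions stay reversible.

interface UndoableAction {
  description: string;
  undo: () => void;
}

const undoStack: UndoableAction[] = [];

function applyAiAction(description: string, apply: () => void, undo: () => void): void {
  apply();
  undoStack.push({ description, undo });
}

function undoLast(): void {
  // Revert the most recent AI-applied action, if any.
  undoStack.pop()?.undo();
}
```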

Fail gracefully

When your system is not certain of the user's intent or has low confidence, make sure there is a path forward that does not rely on AI. Explain why the system was not able to provide a recommendation. Errors are also opportunities to learn more about your user's mental models and improve the system's ability to make recommendations. Consider designing a feedback mechanism that presents as a cue for adjustment rather than an error state.
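
A minimal sketch of such a fallback path, assuming the system reports a confidence score; the threshold, messages, and function names are illustrative.

```typescript
// Minimal sketch: when confidence is low or the service fails, explain
// why and route the user to a path that doesn't rely on AI.

interface AiResult {
  text: string;
  confidence: number; // 0..1
}

async function suggestOrFallback(
  getAiResult: () => Promise<AiResult>,
  showManualPath: (reason: string) => void,
): Promise<string | null> {
  try {
    const result = await getAiResult();
    if (result.confidence < 0.5) {
      // Explain why there's no recommendation and keep the user moving.
      showManualPath('The suggestion did not meet the confidence threshold.');
      return null;
    }
    return result.text;
  } catch {
    showManualPath('The AI service is unavailable. You can continue manually.');
    return null;
  }
}
```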

Encourage feedback

Design mechanisms to collect implicit and explicit feedback to improve the system.
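
One possible event model is sketched below; the event shape and endpoint are assumptions, not an existing GitLab API.

```typescript
// Illustrative sketch: capture explicit ratings and implicit signals
// (accept, edit, dismiss) as one event stream.

type FeedbackEvent =
  | { kind: 'explicit'; rating: 'up' | 'down'; comment?: string }
  | { kind: 'implicit'; action: 'accepted' | 'edited' | 'dismissed' };

function trackFeedback(featureId: string, event: FeedbackEvent): void {
  // Hypothetical transport; any analytics pipeline would do.
  navigator.sendBeacon(`/feedback/${featureId}`, JSON.stringify(event));
}
```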

Framework

To help you put the guidelines into practice, the framework translates them into standard patterns that address the most common UX challenges. Follow the progress in the framework epic.

Dimensions

These dimensions can help you choose the most appropriate pattern for the problem you're solving. A sketch after the list shows one way to encode them.

  • Mode: What's the emphasis and persistence of the AI-human interaction relative to the main context and the user journey?
    • Focused: AI is the main context, with a dedicated focus.
    • Supportive: AI complements the main context and accompanies users along their journey to help them achieve their goals.
    • Integrated: AI is blended into specific moments of the user's flow to help them complete small, discrete tasks.
  • Approach: What should AI focus on improving?
    • Automate tasks: improve efficiency by replacing human decision-making and actions, always done with human awareness and consent.
    • Augment capabilities: improve effectiveness by supporting and improving human decision-making and actions.
  • Interactivity: How does the system surface AI to engage with the user?
    • Proactive: triggered without user interaction.
    • Reactive: triggered by user interaction.
  • Task: What's the user task that AI can assist with?
    • Classification: categorize, suggest, rank, match.
    • Generation: summarize, explain, create.
    • Prediction (or regression): forecast continuous, non-categorical data, like numerical values.
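
As one way to apply these dimensions, the sketch below encodes them as types that a team could use to classify a feature before choosing a pattern; the example profile is illustrative only.

```typescript
// A sketch encoding the dimensions above as types.

type Mode = 'focused' | 'supportive' | 'integrated';
type Approach = 'automate' | 'augment';
type Interactivity = 'proactive' | 'reactive';
type Task = 'classification' | 'generation' | 'prediction';

interface AiFeatureProfile {
  mode: Mode;
  approach: Approach;
  interactivity: Interactivity;
  task: Task;
}

// Example: an inline code completion would likely be integrated,
// augmenting, reactive, and generative.
const inlineCompletion: AiFeatureProfile = {
  mode: 'integrated',
  approach: 'augment',
  interactivity: 'reactive',
  task: 'generation',
};
```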

Patterns

TODO:
Add documented patterns. Follow the progress in the framework epic.

While we don't yet have documented patterns, we share some potential patterns in this video (slides and internal Figma file).

As inspiration for integrated mode patterns, you can find some explorations in this Figma file.
