AI is filling up our lives both inside and outside of work. As Aras Bilgen noted at the recent Rosenfeld Media Designing with AI 2025 conference, and as Linus Lee of Notion wrote previously, AI is like plastic. It’s novel and piling up around us.
Because of the novelty and the new abilities AI affords us, a lot of us are doing and trying things simply because we can. Even in the midst of this bubble of throwing AI at the wall to see what sticks, I want to encourage continued experimentation. It’s important - it’s how we’ll truly advance. Try new tools. Learn new techniques and skills. Push boundaries. Experiment.
My team and I at Ozmo have been using AI in almost every way possible. We’ve even successfully designed and introduced AI into our products. Again, experimentation is great and important - but it should be rooted in our tried-and-true guiding light → users.
My plea is this: as designers of software and tools with the power to add value to our lives, can we remind each other to keep thinking about which user needs and use cases we’re solving for? In other words, just because we can - should we?
We’re already deep into some repetitive patterns and concepts in the AI space. Summaries, as one example, appear nearly everywhere these days. And the list of patterns goes on. CoPilots. Assistants. Chatbots. Even agentic AI, in its future depictions, is quickly becoming a commonplace mention. These currently typical implementations of AI lead me to ask: is anyone stopping to ask users whether they even want these things? While these might be some of the typical staples in AI today, I think we can do better. Especially if we stay focused on users.
With all the new paths and opportunity-doors opening, I’m brought back to my plea: what problems are we actually solving? Maybe they’re problems I didn’t know I/we had… but again, I do think this is where we as designers can pave the road in more distinct directions. We are uniquely suited to discovering opportunities to improve the human experience. Otherwise, we wind up with features that demo well but don’t deliver value. Hallucinating chatbots. Clunky copilots. Overzealous auto-completion.
Rather than simply weaving in AI technology because we can, I’d encourage beginning each innovation cycle with users in mind. Instead of leading with “What can we build with AI?”, perhaps the better question is “What problems do users have, and is AI a natural and obvious means of solving them?”
I, for instance, often work with call center agents who use one of our products for tech support. I know for sure I can’t afford to hurt their efficiency or slow them down. But beyond that, should I just add in a CoPilot? An Assistant? Is talking on the phone and having a text conversation at the same time wise? Is asking a human to hold two different conversations simultaneously humane? I can’t imagine it’s truly effective. Wait 🤯 - what problem am I even solving for agents?
I really appreciate and connect with what James A. Landay of Stanford HAI suggests in this video, “AI For Good” Isn’t Good Enough - A Call for Human-Centered AI:
"We need to have creativity. We need to creatively develop new designs that augment people by accounting for their cognitive abilities and their existing workflows. And we should base these on underlying theories about how and why people behave the way that they do. So we need to make sure that we're not just doing a design process. We also have to understand how humans work, how the mind works, how the body works.”
Landay’s suggestion here - that we should rely on our true skills as UXers and Product Designers and empathetically know and understand humans - is spot on. Rather than plugging in technology just because it’s new and novel, it behooves us to circle back to all the user research and amassed user insights we’ve collected to reveal areas of potential improvement or, better yet, new features or functionality altogether.
I really do appreciate the more subtle but functionally nuanced examples we’re seeing in the world these days. It’s the apps and tools that empower us to keep doing some of the same things we’ve been doing - but better, because AI is woven in.
This recent Wired article describes how the mystery and “magic” of AI is perceived more positively by those who are less technically familiar with how it actually works. I’d go even further and say most users don’t actually care whether AI is involved in their success with whatever task they’re doing - they only truly care about accomplishing said task with as little friction as possible. Technology should be invisible in most scenarios.
Don’t get me wrong - I love the delight of using Genmoji and conjuring up something new and fun in the moment. And watching my partner argue with ChatGPT is delightful in its own right ❤️. But with the power of the internet’s collective knowledge, we should be moving ourselves forward, beyond novelty, right?
To move forward, we should orient ourselves around the problem(s) our users are facing, asking questions about possible solutions like: Would a human benefit from this? Is this faster, clearer, or more empowering than before?
The principles of HCAI — Human-Centered AI — provide a great foundation:

Humans at the center: This core principle places humans at the center of the AI development process, rather than focusing solely on technological advancements.

Ethics: User-centered AI emphasizes the importance of ethical considerations in AI design, ensuring fairness, transparency, and accountability. For example, while AI remains even slightly inconsistent and inaccurate, mistakes and misinformation can happen. We have an ethical responsibility to inform users about this reality and help them navigate errors.

Trust: Proactive AI systems must earn trust. Explanations, previews, and reversibility help users understand and control AI-driven experiences. We’re not replacing human judgment—we’re scaffolding it.

Problem-first design: Early design experiments should center on the interaction or problem, not the model. Focus on the moment of use. Explore what the solution looks and feels like—then layer in the tech. Remember: removing friction from users’ experiences leads to less awareness of the software and the technology behind it, which is a good thing. Most of us aren’t interested in seeing what’s under the hood.

Augmentation: In the HCAI mindset, AI isn’t just automation—it’s augmentation. It proactively assists, intelligently adapts, and enhances user agency. It should learn from patterns, but more importantly, it needs to listen to intent.

User involvement: Involving users throughout the design and development process is crucial for understanding their needs and preferences, leading to more effective and user-friendly AI systems.

Iteration: User-centered AI development should be an iterative process, constantly refining the system based on user feedback and real-world usage.

Continuous learning: AI systems should be designed to continuously learn and improve based on user feedback and data analysis. Humans can keep our AI-influenced solutions human.

Accessibility and inclusivity: User-centered AI should be designed to be accessible and inclusive to all users, regardless of their background or abilities.
I know there’s tons of anxiety going around about AI taking all of our jobs. Keeping users at the heart of why we’re architecting and implementing AI solutions will keep our innovations humane. I realize nothing can stop some jobs and roles from being impacted - some already have been - but I do think this could be one of our long-lasting superpowers as designers.
Our empathetic perspective and deep understanding of humans mean we can always do something AI cannot: truly put ourselves in another person’s shoes. Understanding users’ experiences is how we as UX professionals have delivered value in the world, and it’s how we’ll continue to bring value into products even with the advent of AI.
AI is only as good as the problems it’s pointed at. It is still only algorithms and mathematical computations running on top of training data. The next generation of valuable, great products won’t just be the most technically advanced - they’ll be the most empathetically designed, built by Product Designers and UXers who improve users’ experiences by understanding and starting with their problems.
Starting with a focus on user needs means we begin by questioning assumptions - making “Who is this for?”, “What are they trying to do?”, and “What’s keeping them from having truly delightful experiences?” our foundation, not “How do we jam AI into this?”
In the end, the most valuable AI is invisible—not because it hides, but because it helps.