so that we regain their trust and continuously build a better search output over time?
Since its launch on June 21, 2019, in-app Docs searches have on average doubled from 8,000 to 16,000 weekly, with a total of over 1 million in-app searches in 2019.
Over 40% of the time, users engage with the search filter after getting initial search results. On a weekly basis, over 60,000 views of the Doc pages come in from Google searches.
Google is now the #1 referrer to IBM Cloud Docs thanks to improved SEO.
IBM Cloud Docs search click-through rate is over 60%.
A patent has been filed for the new filter search interaction.
Offering Manager + Architect + Content Strategist: Jenifer Schlotfeldt
Content Operations: Sebastian Fuhrer
Developers: Nathalie Masse + Nick Gong + Raks Padmanabha
Researcher: Mallory Anderson
Designers: Mina Adame + Missy Yarbrough
Primary Skills Utilized
wireframes + sketching
user research support
enterprise design thinking workshop planning and facilitation
Understanding Pain Points
As with any grand adventure, there must be a beginning point. In this case, my product team had been collecting a steady stream of Usabilla feedback about the lackluster search experience within IBM Cloud Docs. Some emerging themes included:
- Phrases vs. Keywords
"I'm looking for something specific, but the more terms I enter in the search box trying to narrow down the results, the more results I actually get. Plus the fact that only a couple of results show on the page at a time, it's quite pedantic trying to go through the search results to find what I'm looking for."
- Too many results
"Searched the documentation for "kubernetes pricing". There are 792 (!!!) search results. And the pricing information for Kubernetes is not even listed in the top 20!!! How you would expect customers to find relevant information with such poor search mechanism?"
"It's incredibly difficult to scan search results for relevant content. Titles and a broken sentence or two really aren't enough. What products am I looking at? Could we at least get a category, like Security or DevOps?"
- Support feedback
There is literally no documentation for VMWare outside of VMWare Solutions (PaaS), even though there are multiple offerings in IBM Cloud, including IaaS offerings. The searches I ran with - solutions all return 0 results. This means that out of all the offerings, only 1 has any documentation at all.
In addition to direct user feedback, Mina and I queried fellow designers to ideate on potential search improvements. Moreover, OM Jenifer Schlotfeldt and researcher Mallory Anderson provided additional suggestions based on their competitive analysis, bringing the total to over 50 promising feature ideas.
After assessing each one against user need statements, our design team selected 26 of them for a Kano survey to find out which features users wanted and expected. In conjunction with Mallory's research requirements, I rendered visual assets to complement the survey experience.
A total of 50 users participated in the unmoderated survey: 22 external users recruited through Respondent.io, UserTesting.com, sponsor users, and personal networks, plus 28 IBMers. While some features drew common interest, prioritization diverged most on filter/narrow and view capabilities. For both groups, a chatbot and feedback cycling ranked low in importance.
Our design squad also conducted 6 remote, 30-minute follow-up interviews with IBMers to hear more about their experiences and expectations for search.
Almost everyone starts searching with keywords
Prefer navigating to a section before using search
Many skip the UI (IBM Cloud Docs) altogether and often use Slack instead
Just, too many results!
Different languages add more challenges
Bookmarking has workarounds
After synthesizing the survey and interview findings, we then made ample preparations for remote workshop sessions.
Along with Jenifer Schlotfeldt, Mallory Anderson, and Mina Adame, I facilitated a well-attended workshop of internal stakeholders, including architects, developers, offering managers, and content designers. On the first day, our team hosted a knowledge share of our work:
Search best practices
When we reconvened for a follow-up session, we quickly gained alignment and agreement on roadmap priorities, definition of dependencies, and measurable goals for analytics.
Mina and I ideated low-fi wireframes of search concepts to present to follow-up users in RITE testing. These workflows included Boolean operations, filtering functionality, and scoped search. We then increased both content and visual fidelity to mid-level for optimized RITE testing with 4 users.
Out of the 26 items assessed for the survey, we successfully launched 9 features that enhanced the search experience.
Filter by category and/or offering
No results view
Scoped search enhancements
What did I learn?
In previous projects, I have had velocity issues running research efforts in parallel with ongoing design work. While I may have been able to execute a small-scale effort on my own, I firmly believe it is vital to involve research early, especially with so many initial features to consider. With our researcher's expertise in selecting and running a specific study type, we were able to isolate exactly the results needed for informed prioritization across multiple stakeholders.
Additionally, the preparation work we invested in a multi-day remote workshop paid off in productive outcomes. Both my own history and our stakeholders' experiences with workshops had been mixed. We applied design thinking to identify our desired outcomes and the exercises required to achieve them, and brought in a specialized workshop facilitator to help moderate conversations (which also freed us to participate directly in the project). I learned how to create more effective workshops by simply applying design thinking.
What would I change?
I would have liked to explore more interaction patterns for the filtering functionality so that it would feel less clunky and overbearing to the user. The number of available filtering options far exceeded my initial expectations, so I think there's ample opportunity to make it friendlier.