Cincom Eloquence is a B2B enterprise-level customer communications management (CCM) solution. It is used to design and deliver communications, either in bulk or on demand. Most of the customers are insurance and financial services orgs in the US and Europe. There are three main components of the application: Author, Administration, and Interactive. 
AI DOCUMENTATION CHATBOT CASE STUDY
My Role
Lead UX designer for the Eloquence product line.
I work on a team of 2-3 full-time UX designers and 1-2 interns. I collaborate with the Eloquence Product Manager, developers, and other stakeholders.
Persona
The Eloquence online documentation assists all users and personas of the application. For this research, we decided to focus on Authors, the business users who design communication templates for formats including print, email, and SMS.
Use Case
Microsoft Word is embedded in Eloquence Author, and most business users understand the basics of Word. In Author, however, there is a great deal of functionality layered on top of Word. There is a steep learning curve for new users, and even experienced users have difficulty with some of the more technical features.
User Interviews
We knew from past user research that the Eloquence documentation and training materials are difficult for users to navigate. Gaps in the documentation materials and navigation challenges make it difficult for new business users to understand how to use Author to accomplish their business goals. Users from 7 organizations requested more in-app and contextual help, specific guides, and working examples.
Problem Statement
How might we ensure the more complicated and technical features of the application are understandable to business users?
Strategy
We worked with the documentation team on improving navigation and help topics based on user feedback. They made a lot of progress, but the Eloquence online help system is huge and would need more resources to overhaul completely. Since new resources were not an option, we wondered if an AI-based chatbot could help users by providing a natural way to ask for help, look up concepts, and learn about the system. We also wondered if we could train it directly on Author code to assist with technical features. 
With this research, we wanted to find out three things:
1. Whether the current generative AI models that are available to us would be capable of answering users’ questions accurately
2. What questions Authors would ask a documentation/training chatbot
3. Other ways we could leverage AI to deliver value for Authors
Chatbot Design – Iteration 1
Two engineers from our platform team developed an AI documentation chatbot based on our design. Two models, from OpenAI and Meta, were tested with internal stakeholders. The final Eloquence Chatbot prototype that we tested with users was a 7-billion-parameter large language model (LLM) fine-tuned and aligned for chat/assistant purposes (Llama-2-7B-chat from Meta). Engineering trained this base model on the Cincom Eloquence online help documentation and on synthetic conversations generated from that text.
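The write-up doesn't show the data-preparation step, but the idea of generating synthetic conversations from help text can be sketched simply. The following is a minimal illustration, assuming plain-text help topics and a single question template; the topic names, prompt wording, and record format are hypothetical, not the team's actual pipeline:

```python
# Hypothetical sketch: turn plain-text help topics into chat-style
# prompt/response pairs suitable for fine-tuning a chat model.
# The prompt template and record format are assumptions for illustration.

def make_training_pairs(topics):
    """topics: list of (title, body) tuples taken from the help documentation."""
    pairs = []
    for title, body in topics:
        # One synthetic conversation per topic: a "user" question about
        # the topic, answered with the documented text.
        prompt = f"How do I work with {title} in Eloquence Author?"
        pairs.append({"prompt": prompt, "response": body.strip()})
    return pairs

# Placeholder documentation content for illustration only.
docs = [("email templates", "Help topic body text goes here. ")]
pairs = make_training_pairs(docs)
print(pairs[0]["prompt"])
print(pairs[0]["response"])
```

In practice a pipeline like this would emit many question variants per topic so the model sees each concept phrased in different ways.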
UX Analysis
Initial results from internal tests showed many factual hallucinations. Examples included:
- Making up a function name that sounded plausible based on a description of what the user was trying to do.
- Generating JavaScript or code in another language when it was supposed to produce code in the Eloquence Authoring language.
- Claiming that certain features were available when they were not.
At this point it was clear that the chatbot needed a significant investment of time and resources to improve its accuracy.
We planned to test the prototype with customers at Cincom’s user conference. I had reservations, given that the bot still needed training. But there were valuable insights to be gained from users’ interactions with the bot, and we could present it with the disclaimer that it was an untrained prototype. We got approval from the Director of Product, so we went ahead with testing.
User Testing
At Cincom’s semi-annual user conference, 33 users who were familiar with Author interacted with the chatbot, asking it 136 questions. The bot gave 36 correct, 26 partially correct, and 74 wrong answers. Given that the model had not been trained extensively, this result was not surprising.
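For context, those counts work out to roughly a quarter of answers fully correct and under half at least partially correct:

```python
# Quick tally of the user-testing results reported above.
correct, partial, wrong = 36, 26, 74
total = correct + partial + wrong
print(total)                                  # 136 questions in all
print(round(correct / total, 2))              # 0.26, ~26% fully correct
print(round((correct + partial) / total, 2))  # 0.46, ~46% at least partially correct
```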
Insights
By collecting and analyzing users’ interactions with the bot, we gained valuable insights into what they would ask and how they felt about the answers. Although it was a relatively small sample size, we could identify patterns in the questions and make recommendations for enhancements to documentation and the app:
- 5 users had questions about best practices for handling incoming date data and expressed confusion about what functions were needed.
- 5 users were interested in digital functionality and wanted to know what's possible with email and SMS templates.
- 4 users asked if they could search for a specific master in a collection and for specific text across multiple components, suggesting a need for more robust search functionality.
Only a few of our users at the conference had previous experience with LLMs. After they interacted with our prototype, the newer Authors felt it would be valuable to have and asked when it would be available. Several users with more Eloquence experience suggested ways to extend AI functionality in Author, and we brainstormed other valuable use cases with the group.
Wrap-up
AI technology should add value for users – it doesn’t make sense to invest in it just to check a box. Our research shows that a documentation chatbot would deliver value for our users. It could also be extended to act more like an assistant in the future as AI capabilities evolve.
Challenges
If we decided to pursue an AI chatbot, the model would require a significant financial investment, as well as the time and resources to train it. Before releasing the feature to our customers, it would need to be thoroughly tested so that we were confident in its accuracy. Some inaccuracies would remain, which users would need to be aware of and we would need to plan for. Engineering and UX learned a lot from prototyping and testing AI. In the end, however, the business was not prepared to invest the necessary resources.
