Look up an implementation plan for enterprise software, and “Custom Reports” will show up as a task with significant effort. However, talk to anybody who has gone through one of these implementations and they will tell you that almost 80% of the custom reports created are hardly used. In fact, most users will not even be aware that many of these reports exist.
What explains this paradox? Is it that users cannot pinpoint their needs and play it safe by asking for everything they can imagine? Or are so many reports so poorly thought through that they quickly lose their utility? It turns out that neither is the primary reason. Users do go a little overboard with their asks, and of course a few reports may have faulty designs too. But the main reason so many carefully crafted reports don’t see the light of day is something else: users tend to “settle down” with the few reports they need frequently. As for the rest, users either forget how to use them or become oblivious to their existence.
What if users could “ask” for “reports” at any time, not just when analytical tools are implemented? What if they could simply voice their thoughts and the system could either identify a close preexisting report or create one on the fly?
Even 2-3 years back this would have sounded like sci-fi. No longer! Advances in speech recognition and natural language processing let users interact with analytical tools in a far more natural way. Further, the speech recognition engine can be made “domain aware” so that it interprets requests in the context of the business and industry rather than as general speech. That means users can talk to the analytical tool as though it were a colleague, instead of hunting for the “mot juste” that computers will understand!
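To make the idea of a “domain aware” engine concrete, here is a minimal, hypothetical sketch in Python. The `DOMAIN_LEXICON` table and `apply_domain_lexicon` function are invented for illustration and are not part of any real product; production systems typically bias the recognizer’s language model itself, but a post-processing pass over the raw transcript conveys the intuition:

```python
import re

# Illustrative, hand-built lexicon mapping common mis-hearings to
# business vocabulary. A real "domain aware" engine would bias the
# speech model itself; this cleanup step just conveys the idea.
DOMAIN_LEXICON = {
    "phil rate": "fill rate",
    "sails order": "sales order",
}

def apply_domain_lexicon(transcript: str) -> str:
    """Rewrite a raw transcript so generic speech becomes business speech."""
    text = transcript.lower()
    for heard, term in DOMAIN_LEXICON.items():
        # Word-boundary matching avoids mangling substrings of other words.
        text = re.sub(r"\b" + re.escape(heard) + r"\b", term, text)
    return text

print(apply_domain_lexicon("Show the Phil rate for that sails order"))
# prints: show the fill rate for that sales order
```

The point is simply that the same acoustic signal resolves differently once the system knows the conversation is about supply chains rather than everyday speech.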
The implications of analytics becoming so much more accessible will be profound. Many occasional users and non-users will jump in and benefit immensely. The busy ones, the lazy ones, the road warriors, the technophobes, the senior management: pretty much everybody in the organization (maybe barring the analysts) will rely more on analytics for their decision-making.
While most speech applications still focus on the consumer, enterprises are increasingly looking at speech interfaces as a bridge to take existing computing capabilities to new users. At OpsVeda, our research suggests that one of the first such bridges will be for analytical applications. Imagine a conversation like the one below:
Business Manager: How are December sales this year compared to 2016?
— It is up 30%.
— 27% of this growth is coming from two new customers – TajChin Network and Burlington Stores.
— It could have been better. Orders worth $8.3 million were canceled due to insufficient stock.
— Do you want to see a breakdown of these cancellations?
Well-known computer scientist Andrew Ng remarked, “As speech-recognition accuracy goes from 95% to 99%, we’ll go from barely using it to using it all the time!” The growing popularity of Siri, Alexa, Cortana and the Google Assistant proves his point. However, his prophecy is likely to ring even truer in the enterprise, where communication is more complex.
As individuals, our interactions with Alexa and its siblings are often meant to ‘check them out’ rather than to attain specific outcomes. In the enterprise, the business user will only have specific questions with real consequences… Show me all orders for customers X & Y for Product line A, shippable in the next 3 days, with a fill rate lower than 85%. Now that’s a question worth answering, since $$$ are at stake. And yes, it is less about Speech to Text… it is more about Text to Domain Aware Queries and Cognitive Skills.
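The query quoted above can be read as a structured filter. The sketch below, with invented field names and sample orders (nothing here reflects OpsVeda’s actual schema), shows the kind of machine-readable form a domain-aware layer might emit and how it would run against order data:

```python
from datetime import date, timedelta

# Hypothetical structured form a domain-aware layer might emit for:
# "Show me all orders for customers X & Y for Product line A,
#  shippable in the next 3 days, with fill rate lower than 85%."
query = {
    "customers": {"X", "Y"},
    "product_line": "A",
    "shippable_within_days": 3,
    "fill_rate_below": 0.85,
}

# Invented sample orders; field names are illustrative only.
orders = [
    {"customer": "X", "product_line": "A",
     "ship_date": date.today() + timedelta(days=1), "fill_rate": 0.80},
    {"customer": "Z", "product_line": "A",
     "ship_date": date.today() + timedelta(days=2), "fill_rate": 0.70},
    {"customer": "Y", "product_line": "A",
     "ship_date": date.today() + timedelta(days=10), "fill_rate": 0.60},
]

def matches(order, q):
    """Apply every clause of the structured query to one order."""
    horizon = date.today() + timedelta(days=q["shippable_within_days"])
    return (order["customer"] in q["customers"]
            and order["product_line"] == q["product_line"]
            and order["ship_date"] <= horizon
            and order["fill_rate"] < q["fill_rate_below"])

results = [o for o in orders if matches(o, query)]
# only customer X's order satisfies all four clauses
```

The filtering itself is trivial; the hard part, and the point of the paragraph above, is reliably producing the `query` structure from free-form speech.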
At OpsVeda, we believe that the initial successes of such assistants will spur more usage, which in turn will make the system even more accurate at interpreting user requests, fueling exponential adoption. That means a new wave of efficiency is around the corner, because so many more decisions will flip from “gut-feel” to data-driven.