“Soon we will show the market new, interesting implementations of AI and voice technology”
Why does BSS believe that every bank needs voice technology, yet in-house competencies are not always worth developing? Why is it more profitable to outsource this task, and what does a bank receive along with an industrial solution? Read about these and other aspects of AI and voice technology in an interview with Georgy Kravchenko, CEO of BSS.
According to a VTsIOM survey on the use of artificial intelligence and AI-based solutions by Russian business, 40% of companies neither use AI in their work nor plan to implement it. Why do you think this is? What is holding back the spread of AI in Russia?
In fact, these figures mean that 60% of Russian companies either already use artificial intelligence and AI-based solutions or are considering their implementation, which is not just good but excellent. And I do not think Russia faces any specific barriers to adopting artificial intelligence and machine learning compared with the rest of the world. The challenges are of the same nature and are similar in all countries, including Russia.
Beyond an organization’s readiness to use AI, the main question is in which area these technologies can create additional value for the company. The next question is whether the company has data for training AI. Typically, the availability, quality, and cost of preparing data are serious constraints that rule out many applications.
The third question is the availability of qualified specialists able to put these technologies to work. There are still few people on the market with the appropriate education, and even fewer experts with a track record of successful implementations.
And, of course, budget is another important constraint for many. Implementing AI on your own involves a great deal of research, design, customization, training, and ongoing development. How many companies are ready for an investment of that scope?
Today, most successful applications of AI and machine learning come from large companies drawing on in-house R&D teams. As a rule, these are corporations with access to huge amounts of data, the ability to find and hire highly qualified specialists, and the funds to pay for all of it.
Nevertheless, despite these limitations, the use of AI and machine learning keeps expanding, and new companies are drawn into the orbit of these technologies. AI-based solutions are spreading to new industries, the number of successful cases is increasing, standard application patterns are emerging, and the number of specialists in artificial intelligence and machine learning is growing.
All of this makes the technology more accessible and more attractive to a wide range of businesses. The market of companies specializing in implementing AI and machine learning is developing rapidly; it is dynamic, volatile, and opens up new opportunities. Industrial solutions already exist in this area, and they are considerably preferable to in-house development. I expect the share of the latter to decrease over time.
The absence of "clear cases" proving the effectiveness of AI is cited among the main factors limiting its spread. BSS is known to be actively developing voice solutions based on artificial intelligence. Do you have such cases? Would you share your implementation experience?
There are many ways to apply the technology, some quite obvious. For example, we have deployed several voice assistants in companies and organizations across various industries. Here are two of the most striking examples. The first is a contact-center automation solution for BPS-Sberbank (Republic of Belarus): the Alesya virtual assistant. It answers clients’ questions 24/7 on more than 20 topics, and the area of Alesya’s responsibility keeps expanding.
The second example is the MFC (public services center) of the Novosibirsk Region, for which we built a voice assistant: Nikolay, a digital civil servant. This neurobot consultant handles up to 70% of incoming calls at the MFC contact center, which cut the duration of a standard call threefold and reduced the time callers wait for an operator. By the way, the project won the Project Leader Prize in 2019.
I do not claim these solutions are unique. Similar technologies are used around the world for self-service and for communicating with clients. Their appeal is that they genuinely lower costs and increase customer satisfaction. The first benefit is self-evident; the second is worth explaining. Firstly, with a robot it is much easier to maintain a consistently high level of service during both normal and peak hours. Secondly, unlike a human, a robot never loses its temper, does not make mistakes, does not get tired, distracted, or sick. It is always stress-resistant, polite, tactful, friendly, and persistent: the perfect helper. However, a modern voice assistant cannot compete with a human operator outside the scope of its specialization.
But these systems are self-learning; they improve over time. Still, it turns out that humans are smarter so far?
It is precisely AI’s ability to learn that makes the technology so promising. However, I would not apply the concept of "mind" to artificial intelligence. Knowing office hours, for example, is a sign of awareness, not of intelligence. I think it is more accurate to say that human operators are far more versatile.
AI knows only what it has been taught, and you need to keep that in mind when testing and using it. I am puzzled by attempts to "break" a robot by asking it ridiculous, unexpected questions that clearly fall outside its tasks. Naturally, it turns out there are questions the robot is not ready to answer. But why does no one test themselves, or a human contact-center operator, with riddles or quiz questions in the spirit of "What? Where? When?"? There will always be questions that such a "tester" cannot answer either.
The effectiveness of AI must be verified under the conditions for which it is intended: test its ability to solve its target tasks. Use examples of natural speech from the real practice of human operators, and you will see what it is actually worth.
Within its narrow subject area, AI can be more effective than a human. It easily "recalls" the contents of a previous conversation, provides information faster, never gets its facts wrong, and never cuts off a conversation because the working day is over.
What tasks should a developer solve when creating AI-based solutions?
If we are talking about a voice assistant, its first task is to figure out what the caller wants. Sometimes that is not so easy, especially when the caller is under stress. And the task does not end with converting speech to text. To solve it, the system needs to understand what problem the caller is describing, what relevant information has already been gathered, and what data is still missing. At the same time, it must grasp the subject of the conversation and, often, the context in which a particular word is uttered.
In short, the task is not trivial. The cost and quality of its solution depend on the technologies used. For example, companies relying on publicly available or non-adapted speech recognition models are forced to focus on correcting the recognized text and on its subsequent analysis.
We use our own speech recognition technology. We adapt and train the system on client data so that it recognizes speech more accurately than generally available models, because it understands the subject matter of a particular inquiry and takes context into account. This gives us an advantage even in recognizing free speech, not to mention the accuracy of call classification.
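The understanding step described here — mapping a recognized utterance to the caller’s problem and extracting the details still missing — amounts to intent classification plus slot filling. The sketch below is purely illustrative: the intents, keywords, and regular expression are invented for the example and are not BSS’s actual models, which, as noted, are trained on client data rather than hand-written rules.

```python
import re

# Hypothetical intents with keyword lists; a production system would use a
# trained classifier on real call transcripts, not hand-picked keywords.
INTENT_KEYWORDS = {
    "account_balance": ["balance", "account", "how much"],
    "branch_location": ["branch", "nearest", "office", "address"],
    "payment_status": ["payment", "status", "transfer"],
}

def classify_intent(utterance: str) -> str:
    """Pick the intent whose keywords overlap most with the utterance."""
    text = utterance.lower()
    scores = {
        intent: sum(1 for kw in kws if kw in text)
        for intent, kws in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def extract_slots(utterance: str) -> dict:
    """Pull structured details out of the utterance (here: card digits)."""
    slots = {}
    m = re.search(r"ending (?:in |with )?(\d{4})", utterance.lower())
    if m:
        slots["card_last4"] = m.group(1)
    return slots

call = "What's the balance on my account, card ending in 1234?"
print(classify_intent(call), extract_slots(call))
# -> account_balance {'card_last4': '1234'}
```

The point of the sketch is the division of labor: the classifier decides *which* problem the caller has, while slot extraction tells the dialog manager *which* data is already in hand and which must still be asked for.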
And the second task?
It is to maintain the dialogue automatically. The easiest approach is to program the system to converse along a few fixed scenarios. Provided the first task has been solved, this seems straightforward. And it would be, if only clients gladly followed your small set of scenarios when solving their problems.
But reality, as always, intervenes. New conversation scenarios have to be added for the same problem for the user’s convenience. Cases surface that the scenarios did not foresee. As the number of topics grows, transitions between topics have to be supported. In short, what requires almost no effort at the start begins to demand serious human resources as it develops. I know companies where more than fifty people have spent several years continuously improving such a dialogue system.
A fundamentally different approach is to have the dialogue constructed automatically by a system built with machine learning. The system learns from dialogs, and as a result, talking with it can resemble talking with a human from the very beginning. Clearly, building such a technology takes more effort at the initial stage. However, if you know how to use pre-trained models, this problem is largely smoothed out, especially as the amount of data you work with grows.
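The scripted approach described above can be pictured as a hand-maintained state machine. The toy scenario below is invented for illustration; it shows why every new topic, unforeseen reply, or cross-topic jump means another branch that a human must write and maintain by hand:

```python
# A toy scripted dialog: each state maps expected reply keywords to the
# next state. Every new topic or cross-topic transition is another entry
# someone maintains manually -- which is what makes the approach
# labor-intensive as the system grows.
SCENARIO = {
    "start": {
        "prompt": "Do you want your balance or the nearest branch?",
        "next": {"balance": "ask_card", "branch": "ask_city"},
    },
    "ask_card": {
        "prompt": "Please say the last four digits of your card.",
        "next": {},  # terminal in this toy example
    },
    "ask_city": {
        "prompt": "Which city are you in?",
        "next": {},
    },
}

def step(state: str, user_reply: str) -> str:
    """Advance the scripted dialog one turn."""
    transitions = SCENARIO[state]["next"]
    for keyword, next_state in transitions.items():
        if keyword in user_reply.lower():
            return next_state
    # Off-script reply: restart (a real system would escalate to a human).
    return "start"

print(step("start", "My balance, please"))  # -> ask_card
print(step("start", "I want a loan"))       # -> start (off-script)
```

In the learned approach, by contrast, the transition table is not hand-written: the mapping from conversation state to next action is induced from recorded dialogs, which is why the up-front cost is higher but the per-topic maintenance cost is lower.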
Does that mean that a system based on machine learning is more promising?
Strategically, I believe, the future lies precisely in the second direction. However, much remains to be done. Today we combine both approaches in our work.
Comparing them, the advantages of the first are an easy start and the ability to make changes quickly; the pluses of the second are better handling of a large number of topics and of more complex dialogs.
The implementation of the second approach requires a sufficient amount of data, which, as I said, is a typical challenge for any machine learning system.
When do you think AI-based voice solutions will become relevant for small and medium-sized businesses? According to VTsIOM, the main consumers of such solutions in Russia today are big businesses.
As I said, as a rule, there are two advantages on the side of large business: the availability of financial resources and data.
Thanks to the first, big business is tempted to invest in in-house development of such solutions. However, developing everything yourself is not always the rational choice. In most fields, and in voice technology in particular, there are leading companies with scalable technology, the necessary expertise, and mature tools. Using ready-made solutions from such companies guarantees fast results. So today, a large budget for in-house development is no longer the key differentiator.
The situation with data volume is also far from hopeless. Reducing the amount of data required for machine learning has long attracted the attention of research teams, both academic and applied, all over the world. I have already said that we achieve high speech recognition accuracy by training the system on customer data. What I did not mention is that the amount of data we need is two to three orders of magnitude smaller than what, for example, Google’s speech recognition system is trained on. Similarly, the amount of client data needed to train AI-based dialogue management systems is not as large as one might expect.
As an example, I can mention the problem we solved for Rent-a-Ride, a startup that offers private owners’ cars for short-term rental.
Despite a convenient website and app, 25% of the company’s clients prefer to place orders by phone. The outsourced contact-center operators made mistakes or failed to pass on all the information they received to Rent-a-Ride employees, and while the information was being relayed, clients waited on the line.
Rent-a-Ride introduced a BSS neurobot that works with retrospective data for high-quality client segmentation, with the goals of increasing revenue, reducing costs, and eliminating errors. The robot identifies clients who are ready to place an order and quickly transfers the call to a manager to close the deal; the application details appear immediately in the CRM, where the manager sees all the necessary data. The robot also informs callers about rental restrictions, for example for taxi drivers, identifying them by keywords and phrases ("car for work", "high mileage", etc.). Finally, the neurobot generates a stream of structured data on every call, continuously improving the quality of analytics.
As a result, revenue from telephone applications grew one and a half times. The load on managers fell by more than 20%, client segmentation errors were eliminated, and the time to identify a client’s needs and hand the application to a manager was cut to 2 minutes.
In general, if your contact center has more than 10 operators and there are typical questions accounting for 5-10% of calls, your own data is already enough to fine-tune and configure an AI-based voice assistant. With only 1-2 operators, your data really is insufficient. Keep in mind, however, that typical questions are usually similar across an entire industry, so we will inevitably soon see industry-standard solutions that small businesses can use out of the box.
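As a rough back-of-envelope check of that rule of thumb, the per-operator call volume below is an assumption for illustration, not a figure from the interview:

```python
# Estimate how many examples of "typical" questions a 10-operator contact
# center accumulates per month. Calls per operator per day is assumed.
OPERATORS = 10
CALLS_PER_OPERATOR_PER_DAY = 80   # assumption, not sourced
WORKDAYS_PER_MONTH = 21
TYPICAL_SHARE = 0.05              # 5% of calls are typical questions

monthly_calls = OPERATORS * CALLS_PER_OPERATOR_PER_DAY * WORKDAYS_PER_MONTH
typical_examples = int(monthly_calls * TYPICAL_SHARE)
print(monthly_calls, typical_examples)  # -> 16800 840
```

Even under these modest assumptions, a few months of calls yield thousands of labeled examples of the typical questions — the kind of volume that suffices when the assistant is fine-tuned from a pre-trained model rather than trained from scratch.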
Does BSS have such a ready-made industrial solution for which you can offer not only implementation, but also control, support, and development?
We already offer banks a simple and easy entry into voice technology and dialogue management. We have an interesting offer in remote banking services (RBS).
In essence, we propose nothing less than a new RBS channel. The fact is that, despite the availability of Internet banking, mobile banking, chats with the bank, and so on, a significant share of clients, oddly enough, keep calling the bank. Banks automate more and more RBS functions while at the same time spending huge sums on contact centers where service is still manual. As a rule, 30-40% of contact-center calls concern typical questions long since solved in RBS: account balance, currency exchange rate, amount of debt, nearest branch, payment status, and the like. All such voice or text inquiries can be automated quickly, right now, with our Digital2Speech solution.
Digital2Speech receives calls, understands spoken language, and serves clients automatically. Optionally, the system can switch the call to a competent operator, handing over the data it has already collected about the client’s inquiry. Overall, the solution gives the client the most comfortable and natural way to communicate with the bank and removes unnecessary costs for the bank.
What is its competitive advantage?
First, BSS’s voice bank does not require integration with the bank’s contact center, which significantly reduces implementation time and budget. Digital2Speech is a voice and text channel built inside the RBS, and it can serve both individuals and legal entities. If a bank already uses BSS’s Digital2Go RBS, deploying and launching the system takes up to 3 weeks.
Second, Digital2Speech is an on-premise solution, so information security requirements and the personal data law are observed, and customer experience data stays entirely under the bank’s control. This is a sensitive issue for the financial sector.
Third, maintenance is simple. Moreover, the bank can improve and develop the system on its own; its employees need no special skills in machine learning or data management to do so.
Fourth, easy scalability: the solution uses proven models of communication with clients.
The economic effect also matters. Digital2Speech lowers costs for operators, developers, and engineers, and it fosters a positive customer experience for bank clients. The solution can also be deployed as a standalone service, without the Digital2Go platform.
The rest of the market should not forget that leading banks are already actively using speech recognition systems: both voice assistants and speech analytics. Ignoring this fact will widen the gap to the leaders, and in the future it may become critical for the business. Already today, a voice assistant in the contact center is becoming a prerequisite for high-quality banking service, and the role of voice technology in banks will only grow year by year. Alfa-Bank, for example, recently announced that voice technology and other AI elements have already increased contact-center sales by 9-12%.
About a third of respondents (28%) see no prospects for AI in their industry. In Russia, AI is being actively introduced only in banks, retail, and telecom. Does this match your statistics and implementation experience? Which industries do you consider promising?
Yes, these areas inherently involve a great deal of client interaction and client data, so it is natural that the first examples of AI appear there. But AI is used in other industries too. It can be very helpful in medicine, for example: a lot of effort goes into expert systems trained to make diagnoses from the results of hardware-intensive examinations. There are other interesting examples as well: technologies similar to voice biometrics can be used for the initial diagnosis of various respiratory and pulmonary diseases. Examples of this kind can already be found in agriculture, too.
There are other data as well. MIT Sloan Management Review and BCG published a study surveying more than 2,500 company executives across 27 industries worldwide; it found that 90% of them invest in AI in one way or another. The global market looks more optimistic. What do you make of this? And what are BSS’s plans for AI?
I have very high expectations in this area. We have invested seriously in AI and can now compete even with the technology giants.
Moreover, we do not limit ourselves to banks. We are looking at retail, and there are interesting projects in other areas. I will not name them yet, but the plans are many. I think BSS will very soon show the market new, interesting implementations of AI and voice technology.
And to conclude our conversation, please say a few words about the situation the pandemic has created around the world. Is there a place for AI here?
All countries are now experiencing unprecedented restrictions on personal movement and the transfer of a huge number of people to remote work. Assessing the challenges that arise from this, I would note the following.
First, do not view AI solely as a way to reduce costs or smooth out peaks in client requests. Think about the benefits it can bring in extreme situations, such as the current emergency.
Second, do not postpone changes, but do not try to achieve everything in one step either. Start small and move toward the result step by step. I think this is true for any business, but it is especially true for AI.
Third, being tied to the physical location of offices is increasingly turning from an advantage into a burden. The main things in business are the team and your unique technologies. To succeed in today’s world, your own AI must become part of that core.