Admin • 29 September 2021

NLG - Why and How It Makes Sense!

NLG stands for Natural Language Generation. Various techniques can be used to perform NLG, and as you can guess, Deep Learning is becoming the norm for NLG as well. Let's call it "Deep NLG"! Deep NLG is currently transforming the conversational AI space. Yes, your future smart assistants are going to be powered by sophisticated Deep NLG techniques, and they will be virtual humans rather than robots.
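To make the idea concrete, here is a minimal sketch of Deep NLG in code: the reply comes out of a pretrained neural language model instead of being written by a developer. The Hugging Face transformers library and the GPT-2 model are illustrative assumptions, not part of any specific product discussed here.

```python
# A minimal Deep NLG sketch: a pretrained neural language model
# generates a free-form reply rather than returning canned text.
# Library and model choice are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "User: Can I reschedule my appointment?\nAssistant:"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The reply is generated token by token by the model, not looked up.
print(result[0]["generated_text"])
```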
In most conversational AI use cases, the responses coming from the bot are hard-coded by the developer. With the introduction of Deep Learning techniques, the natural language understanding (NLU) side improved hugely. But even after understanding what the user says, the bot presents a hard-coded response, strategically chosen to imitate a human-like reply. As you can guess, this doesn't scale well in the real world, or at the very least, making it work smoothly requires a huge effort in planning out hard-coded responses. This is one of the reasons why the "chatbots" we meet every day sound so strange and never deliver the intended user experience. These hard-coded-response systems cannot technically be labeled Artificial Intelligence by definition, even though we often tag them as "AI". So, from here onwards, my apologies for misusing the word "AI"!
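To illustrate the pattern (the intents and responses below are hypothetical), a typical platform bot looks roughly like this: the NLU step, however sophisticated, only selects which hard-coded string to return.

```python
# A toy illustration of the hard-coded-response pattern:
# NLU classifies an intent, then the reply is simply looked up.
RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "opening_hours": "We are open from 9am to 5pm, Monday to Friday.",
    "fallback": "Sorry, I didn't get that. Could you rephrase?",
}

def classify_intent(utterance: str) -> str:
    # Stand-in for a real NLU model (e.g., an intent classifier).
    text = utterance.lower()
    if "hello" in text or "hi" in text:
        return "greeting"
    if "open" in text or "hours" in text:
        return "opening_hours"
    return "fallback"

def respond(utterance: str) -> str:
    # No generation happens here: the response is a fixed string.
    return RESPONSES[classify_intent(utterance)]

print(respond("Hi there!"))           # -> hard-coded greeting
print(respond("When are you open?"))  # -> hard-coded opening hours
```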
Here is the gap! When it comes to implementations of conversational AI, we have to separate prebuilt use cases such as Google Meena from conversational AI platforms such as Google Dialogflow. In conversational AI platforms, we let developers build conversational use cases according to custom requirements. As you may have noticed, Deep NLG is not featured in the mainstream platforms. Not because the vendors forgot to implement it; there are more fundamental issues preventing Deep NLG from being introduced into conversational AI platforms, even while it is becoming highly popular in prebuilt conversational agents such as Google Meena. But why?
This "why" is partially linked to our previous article on "A bottleneck in conversational AI implementations". In the current article we will dive into this "why" and also suggest how to make sense of it. The first thing to highlight is that in a typical conversational AI use case implementation, we cannot assume the availability of a huge dataset to train a Deep NLG model end-to-end. The reasons are clear:

  • Most of the time, the required huge dataset is not available.
  • Even when such a dataset is available, how do you customize it to fit client requirements that keep changing throughout the maintenance cycle?

(Ex: Let's say your client simply wants the order of some questions changed, plus a small modification to the conversational flow. You have to modify the data to represent this requirement and train the model again. Further, you have to make sure that the other aspects of the bot remain the same after the change. This gets really messy!)

Looking at the above constraints, and considering the bottleneck we discussed in our previous blog post, we are left with a more constrained technical approach to Deep NLG adoption in conversational AI platforms. One thing is clear: the NLG approach has to be based on a previously trained neural model. Therefore, transfer learning, few-shot learning, or general models where the knowledge context can be fed as an input would make sense for implementing Deep NLG. While Reinforcement Learning (RL) may add a new dimension to this discussion, RL has yet to mature before it can be applied in complex real-life situations.
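As a sketch of the "knowledge context fed as an input" idea: the use-case knowledge is supplied in the prompt of an already-trained model, so no end-to-end training on a huge use-case dataset is required. The library, model, and knowledge snippet below are illustrative assumptions.

```python
# Knowledge-grounded generation sketch: a pretrained model conditions
# on use-case knowledge passed in at inference time, instead of being
# retrained for each use case. Library and model are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

knowledge = (
    "Appointments can be rescheduled up to 24 hours in advance. "
    "Cancellations within 24 hours incur a fee."
)
user_turn = "Can I move my appointment to tomorrow morning?"

# The knowledge context is prepended to the prompt; changing the
# use case means changing this text, not retraining the model.
prompt = f"Context: {knowledge}\nUser: {user_turn}\nAssistant:"
out = generator(prompt, max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```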
However, in all of the above approaches, deployed NLG models have to juggle different contexts during a typical conversational use case, since human conversations are highly dynamic in nature. Configuring the Deep NLG model with a single knowledge base and letting the model focus on the right part of it while a conversation is going on is definitely not a good approach. While this may produce a smooth conversation (assuming you deploy a pretty good Deep NLG model), there is no way of guaranteeing that the flow will go as you expect. Beyond the other challenges preventing NLG adoption in conversational AI platforms, one fundamental issue to address is managing the context on which NLG is to be performed.
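One hypothetical way to make that context management explicit: let the current dialogue state select a narrow slice of the knowledge base, so the model only ever conditions on the part of the knowledge the developer intends. The states and knowledge content below are made up for illustration.

```python
# Sketch of explicit NLG context management: instead of one big
# knowledge base, the dialogue state picks the slice the model sees.
# States and knowledge entries are hypothetical.
KNOWLEDGE_BASE = {
    "booking": "Appointments can be booked online or by phone.",
    "rescheduling": "Rescheduling is free up to 24 hours in advance.",
    "payment": "We accept card payments only.",
}

def context_for(state: str) -> str:
    # Only knowledge relevant to the current state reaches the model,
    # keeping the generated flow under the developer's control.
    return KNOWLEDGE_BASE[state]

def build_prompt(state: str, user_turn: str) -> str:
    return f"Context: {context_for(state)}\nUser: {user_turn}\nAssistant:"

print(build_prompt("rescheduling", "Can I change my slot?"))
```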

With that being said, it is clear that we have a solutioning challenge rather than a gap in the underlying technology. It is all about how to configure the knowledge base in a given use case and manage it strategically for NLG purposes. The discussion now links back to our previous blog post in many ways.
At Cognius, we have thought carefully about this challenge. Our innovative approach allows developers to easily develop and deploy conversational use cases powered by cutting-edge NLG techniques while retaining full control over the responses generated by the NLG model. We will discuss the thinking behind our NLG approach in another interesting blog post.