
Chatbots: The Limitations of Natural Language Processing


The most common misconception about Chatbots is that Natural Language Processing (NLP) is the only method for delivering conversation-as-a-service. As covered in earlier articles, this is not true, but it is important to understand some of NLP's limitations.

The popular idea of Chatbots conversing naturally with people largely began in 1950, when Alan Turing published an article titled "Computing Machinery and Intelligence". It proposed what is now called the Turing test as a criterion of intelligence. Today, the most widely used means of pursuing this is NLP, which has been popularized by tech titans, specialist corporates and a growing number of start-ups.

NLP is underpinned by Machine Learning, which enables the Chatbot to learn without being explicitly programmed. The process involves ingesting data, from which the Chatbot teaches itself through a series of training cycles.
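As a rough illustration of what such a training cycle can look like in practice, the sketch below trains a tiny intent classifier with scikit-learn. The utterances, intent labels and library choice are illustrative assumptions, not taken from any particular Chatbot product.

# A minimal sketch of the "ingest data, then train" cycle described above.
# The example utterances and intents are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Ingested training data: example utterances paired with intent labels.
utterances = ["what is my balance", "show my account balance",
              "transfer money to savings", "send funds to my savings account"]
intents = ["check_balance", "check_balance", "transfer", "transfer"]

# One training cycle: fit the text features and the classifier together.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

# The learned behaviour: map a new utterance to the closest intent.
print(model.predict(["show me my balance"]))  # expected: ['check_balance']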

No reasonable person thinks that Artificial Intelligence (AI) in the form of Machine Learning is close to becoming an all-knowing Singularity. This is simply not going to happen in the foreseeable future. There is no doubt that AI can and will continue to outperform humans in specialist, bounded areas of knowledge; robotics in manufacturing has proved this at industrial scale since the 1980s.

There are many different types of Machine Learning, all of which are, in the end, algorithms. Understanding the strengths and weaknesses of these algorithms is important in the context of the targeted application.

Care needs to be taken when Machine Learning is applied in areas such as regulatory, statutory, policy and procedural practices. A simple illustration is that you cannot empower Machine Learning to rewrite regulatory, statutory, policy and procedural matters, which are intrinsic to managing governance, risk and compliance.

In this targeted area, the assurance that a decision path has been thoroughly tested cannot be guaranteed when it is underpinned by Machine Learning. Why?

Machine Learning does not perform well if it is subsequently fed incomplete or wrong data. More worryingly, Machine Learning does not have the ability to stop over-learning on its own. A human has this skill: instinct says, "I now know enough". A human knows that over-learning can start to confuse or cloud matters.
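One common safeguard for this in Machine Learning practice is early stopping: monitor performance on held-out data and stop training once it stops improving. The sketch below is a minimal illustration of that idea; the loss values and the patience setting are invented for the example.

# A minimal sketch of early stopping as a guard against "over-learning":
# stop once validation loss stops improving for a few rounds.
def train_with_early_stopping(val_losses, patience=2):
    """Return the training round at which to stop, given per-round validation losses."""
    best_loss = float("inf")
    best_round = 0
    rounds_without_improvement = 0
    for round_index, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_round = loss, round_index
            rounds_without_improvement = 0
        else:
            rounds_without_improvement += 1
            if rounds_without_improvement >= patience:
                break  # further training is likely over-fitting, so stop here
    return best_round

# Validation loss improves, then worsens: the model is starting to over-learn.
print(train_with_early_stopping([0.9, 0.6, 0.5, 0.55, 0.7, 0.9]))  # -> 2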

Machine Learning does not have this instinctive human capability. This can lead to a slippery slope in which the Chatbot's judgement becomes impaired. The consequence is decision contamination, which might happen very quickly or be gradual and difficult to detect until it is plainly obvious that harm has already been done.

In areas such as regulatory, statutory, policy and procedural matters, decision precision and the transparency of the rationale are best controlled by subject matter experts. Stated simply, a transparent system is one in which it is feasible to discover how and why the Chatbot made each decision. This is important for building trust, governance, risk management, compliance, evidence, auditability and quality improvements.
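A minimal sketch of what such transparency can look like is shown below: each decision is made by an explicit, expert-authored rule, and the rule that fired is logged alongside the decision so it can be audited later. The rules, keywords and log fields are hypothetical examples, not a prescribed design.

# A minimal sketch of auditable, rule-based decisions: every decision the bot
# makes is recorded with the rationale that produced it.
from datetime import datetime, timezone

RULES = [
    ("refund_request", ["refund", "money back"]),
    ("complaint", ["complaint", "unhappy"]),
]

audit_log = []

def decide(utterance: str) -> str:
    """Match the utterance against explicit rules and record the rationale."""
    text = utterance.lower()
    for intent, keywords in RULES:
        matched = [k for k in keywords if k in text]
        if matched:
            audit_log.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "utterance": utterance,
                "decision": intent,
                "rationale": f"matched keywords {matched}",
            })
            return intent
    # No rule matched: record that too, and hand over to a human.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "utterance": utterance,
        "decision": "escalate_to_human",
        "rationale": "no rule matched",
    })
    return "escalate_to_human"

print(decide("I want my money back"))  # -> refund_request, with a logged rationale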

NLP is an important method for delivering Chatbot services, but it is just one of the methods. NLP is currently being over-hyped, which naturally leads to disillusionment. However, conversation-as-a-service is unstoppable, and we are simply on a journey of enlightenment.

  

 

