The answer is “Yes!”
Artificial Intelligence (AI) is an intriguing, even intimidating, term these days, because it is showing its potential in almost every industry. In its most common form, we find it in the speech recognition software on our phones; we all know Siri, Cortana and Google Assistant, right? But here I am taking AI in a much broader sense. I am talking about the machine’s ability not only to know, but also to understand the environment it operates in, to take actions on its own intelligence, and, remarkably, to “learn” and improve itself.
Based on these three tasks that an AI can perform, we can put AI into two distinct categories: Weak AI and Strong AI. Weak AI mainly obtains, analyses and evaluates its environment, and takes very basic actions: reading your text messages and emails, identifying your voice via voice recognition, converting your speech into text, scanning your fingerprints and making access-control decisions, and so on. This covers almost all of the applications currently available in our standard gadgets.
Strong AI, however, pertains to more creative systems: they take more intelligent actions, and they can learn how to improve.
After writing multiple articles on AI and Machine Learning, I can say with some confidence that we can train AI-powered chatbots not only to handle simple human inputs, but also to carry out more elaborate procedures, such as software testing.
A 2002 study by the US National Institute of Standards & Technology found that “the national annual costs of an inadequate infrastructure for software testing is estimated to range from US$22.2 to US$59.5 billion”, or about 0.6 per cent of the US gross domestic product. This number does not include the costs of catastrophic failures of mission-critical software, such as the US$165 million Mars Polar Lander shutdown in 1999, which was attributed to a software glitch.
According to Artificial Intelligence Methods in Software Testing (World Scientific Publishing Co.), the US Department of Defense alone loses over US$4 billion a year to software failures.
Why is AI needed in software testing?
Software testing consumes a major chunk of any product’s timeline, along with the human resources, effort and money that follow. A human test engineer would argue that a human mind is vital to testing: it identifies problems and creates test environments that the developers failed to think through. Jason Arbon, CEO of Appdiff, explained in an interview:
“80% of the testing is the repetition of same clicks and checks of the features your software had yesterday, the worst effect these tests have is the hours and hours of human resource and heaps of cost you spend on them. This work can be done by AI and automation. That other 20% of a tester’s time these days, the creative, questioning, reasoning part—that is what people should really be doing, and that rarely happens.”
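That repetitive 80 per cent is exactly what a machine can take over. As a minimal sketch, here is what those “same clicks and checks” look like once automated — the `login` and `format_price` functions are hypothetical stand-ins for real app features, and the suite simply re-runs the same assertions after every build:

```python
import unittest

# Hypothetical app code under test -- stand-ins for real features.
def login(user, password):
    """Return True for a known user/password pair."""
    return (user, password) == ("alice", "secret")

def format_price(cents):
    """Render an integer amount of cents as a dollar string."""
    return f"${cents / 100:.2f}"

class RegressionChecks(unittest.TestCase):
    """The 'same clicks and checks' re-run after every build."""

    def test_login_accepts_valid_credentials(self):
        self.assertTrue(login("alice", "secret"))

    def test_login_rejects_bad_password(self):
        self.assertFalse(login("alice", "wrong"))

    def test_price_formatting_is_stable(self):
        self.assertEqual(format_price(1999), "$19.99")

# Run the suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionChecks)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Nothing here requires creativity: once written, the suite runs unattended, which is the point Arbon is making about freeing the tester’s remaining 20 per cent for the questioning, reasoning work.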
The current era vs. the future
In automating the testing process, the real task is maintaining the test code as new features and add-ons arrive. The limitation of current testing is that it only looks for bugs where it is told to look; a new feature has no effect on the test results unless a human tester applies creative thinking and keeps the test code up to date with cases for those features and add-ons.
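This maintenance gap is easy to see in a toy example. Assuming a hypothetical `total_price` function that has just gained a `discount` feature, the existing test keeps passing and nothing flags the untested path until a human writes a case for it:

```python
# Sketch of the maintenance gap: existing tests stay green even
# though a new (hypothetical) discount feature went in untested.

def total_price(items, discount=0.0):
    """Sum item prices; `discount` is the newly added feature."""
    subtotal = sum(items)
    return subtotal * (1 - discount)

# The old test only looks where it was told to look:
assert total_price([10, 20]) == 30

# Nothing exercises the new `discount` path until a human adds:
assert total_price([10, 20], discount=0.5) == 15
```

The suite reports success either way; the second assertion exists only because someone thought to write it.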
AI, on the other hand, can detect even the smallest changes in the software. An AI program that knows the requirements of the desired end product can generate code for hundreds of test cases in a fraction of the time a human tester needs. What you need to do is feed the chatbot or system as many examples of software testing as possible, and teach it to differentiate between bugs and features (again, by giving examples of both).
What you are doing is training a bot to become an intelligent software tester. Just as a student is only as good as the teacher, the bot will be more intelligent when the most experienced and creative testers train it. The bot builds a decision-making algorithm for what the app team calls a “bug” versus a “feature”.
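To make the idea concrete, here is a deliberately tiny sketch of that training loop — a word-overlap classifier built only from the labelled examples the team feeds it. The reports and labels are invented for illustration, and a real system would use a far richer model, but the shape is the same: examples in, decision rule out.

```python
from collections import Counter

# Toy training data -- hypothetical reports labelled by the app team.
TRAINING = [
    ("app crashes when saving a file", "bug"),
    ("error popup after login timeout", "bug"),
    ("screen freezes on rotate", "bug"),
    ("added dark mode toggle in settings", "feature"),
    ("new export button on the report page", "feature"),
    ("support for larger image uploads", "feature"),
]

# Count how often each word appears under each label.
word_counts = {"bug": Counter(), "feature": Counter()}
for text, label in TRAINING:
    word_counts[label].update(text.lower().split())

def classify(report):
    """Pick the label whose training vocabulary overlaps the report most."""
    words = report.lower().split()
    scores = {
        label: sum(word_counts[label][w] for w in words)
        for label in word_counts
    }
    return max(scores, key=scores.get)

print(classify("app crashes on login"))  # prints "bug"
```

The more (and better-labelled) examples the experienced testers supply, the sharper the decision rule gets — which is exactly the student-and-teacher relationship described above.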
But still, the thought of machines testing app functionality and finding bugs may not seem plausible, right? Well, TCS has developed a 360 Degree Assurance tool that leverages AI, machine learning, and cognitive software. Adoption is happening faster than ever, and it is no mystery why: to be honest, an AI-powered system performs the same routine tests that you, or any QA company, run on an app to qualify or quantify it.
A software engineer himself only runs a handful of test cases, and after every feature addition he merely checks that all the previous ones still work. No matter how much you hate the thought, an AI is much better and faster at churning through these tests than we are.
The views expressed here are of the author’s, and e27 may not necessarily subscribe to them. e27 invites members from Asia’s tech industry and startup community to share their honest opinions and expert knowledge with our readers. If you are interested in sharing your point of view, submit your post here.