Bad Users Can Fail A Good AI System

A good artificial intelligence (AI) solution in the hands of bad users can be disastrous, while an average AI solution in the hands of good users can be a great success. Hence, it is important to educate users so they can extract maximum positive value from the system.

If users or other interacting systems are not good enough, then no matter how intelligent your AI system is, it will eventually fail to deliver. Outright failure is not the only possible outcome, either; in some cases, poor usage can also create business risks.

AI systems are not standalone; they often interact with several other systems and with humans, too. At each interaction point, there is a chance of failure or degraded performance.

There are a variety of users

We can classify computer users by their roles or expertise levels. Role-based classifications produce categories such as administrator, standard user, or guest, whereas skill-based groupings place users in categories such as dummy, general user, power user, geek, or hacker.

All these categories assume users are at least good enough to use the computer or any software installed on it. However, users whose expertise is borderline, just good enough or below, soon become bad users of technology. So much so that they can bring a relatively good computer system, including an AI system, to a halt.

Additionally, I have seen that the following user categories are dangerous enough to cause problems:

Creative folks

Creative users are generally skilled enough to use a tool, but they often push it beyond its specified use, which can render the tool useless or break it.

I remember an interesting issue during my tenure with LG Electronics. One of the products LG manufactured was the washing machine, a typical home appliance that a normal user would use for washing clothes.

However, when several field failure reports came in from service centres, especially in the north-west part of India, we were stunned by the creativity of washing machine users.

Restaurant owners in Punjab and nearby regions of India were using these machines for churning lassi at a large scale. Churning lassi requires considerable human strength because of its thick texture, especially when making it in large commercial quantities.

This is why restaurant owners started using top-loader washing machines for making lassi. However, this unintended and unspecified usage of the appliance caused operational issues and resulted in an influx of service calls. This kind of creativity looks interesting at face value but certainly causes problems with technology tools.

Another example of such creativity is the use of Microsoft Excel in organisations. How many companies have you seen where Excel is used not only for tabulation and record-keeping but also for small-scale automation through macros?
How many times have you seen people using PowerPoint for making reports instead of presentations?

All these are creative uses of tools and may be acceptable once in a while. Mostly, however, the users are abusing the system and tools, which can cause unintended damage and losses to the organisation. These types of users also expose companies to more substantial risks.

Naughty users

Naughty users are not productive, but they do not mean any direct harm either. They are merely toying with the system and may cause unknown issues, especially with AI systems.

If your AI system has a feedback loop through which it gathers data for continuous training and adjustment, this can be a real issue, as erroneous or random data can disturb the established process and models.
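
As an illustration, here is a minimal sketch (in Python) of one way to guard such a feedback loop. All names here (FeedbackRecord, is_plausible, collect_feedback) are hypothetical, and real validation rules would depend on your data and models; the point is simply that user-supplied feedback should be sanity-checked, and suspect records quarantined, before anything reaches retraining.

    # Minimal sketch: guard a retraining feedback loop against bad input.
    # All names are hypothetical; real checks depend on your data and model.
    from dataclasses import dataclass

    @dataclass
    class FeedbackRecord:
        user_id: str
        rating: int   # expected range: 1 to 5
        label: str    # expected: one of the known class labels

    KNOWN_LABELS = {"positive", "negative", "neutral"}

    def is_plausible(record: FeedbackRecord) -> bool:
        # Reject feedback that is out of range or uses an unknown label.
        return 1 <= record.rating <= 5 and record.label in KNOWN_LABELS

    def collect_feedback(records: list[FeedbackRecord]) -> list[FeedbackRecord]:
        # Keep only plausible records; quarantine the rest for human
        # review instead of feeding them straight into retraining.
        accepted = [r for r in records if is_plausible(r)]
        quarantined = [r for r in records if not is_plausible(r)]
        if quarantined:
            print(f"Quarantined {len(quarantined)} suspicious feedback record(s)")
        return accepted

    if __name__ == "__main__":
        batch = [
            FeedbackRecord("u1", 4, "positive"),    # plausible
            FeedbackRecord("u2", 999, "positive"),  # out-of-range rating
            FeedbackRecord("u3", 3, "lol"),         # unknown label
        ]
        print(collect_feedback(batch))  # only u1's record survives

Even a simple gate like this keeps one naughty user's random clicks from silently skewing the next training run, while the quarantined records remain available for human review.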

Deliberate saboteurs

Users who deliberately act badly and try to sabotage the system are often disgruntled employees.

Sometimes these users think that the AI system is no better than they are and that they must teach it a lesson. They deliberately try to make the system fail at every chance they get.

Deliberate saboteurs usually act with a plan, which makes them difficult to spot in the early stages.

Luddites

A classic example of bad users is the Luddites: people who are, in principle, opposed to new technology or new ways of working.

Luddites were a secret oath-based organisation of English textile workers in the 19th century, a radical faction that destroyed textile machinery as a form of protest. The group was protesting against the use of machinery in a ‘fraudulent and deceitful manner’ to get around standard labour practices. Luddites feared that the time spent on learning the skills of their craft would go to waste as machines would replace their role in the industry.

We often use this term for people who oppose industrialisation, automation, computerisation, or new technologies in general. These users, mostly employees, feel threatened and affected by the implementation of new AI systems. If your change management function is doing a good job, these types should be easy to spot.

Bad user versus incompetent user

Incompetence can mean different things to different people. However, in general, it indicates the inability to do a specified job at a satisfactory level.

If users can use the system without (human) errors and in the way it was meant to be used, you can call them competent users. Incompetent users fail to use the system flawlessly because of their own limitations, not the system's, and they often need considerable help from others to use it.

Bad users, on the other hand, may be excellent at using the system, but their intent is not a good one.

All incompetent users are inherently bad users of the system; however, bad users may or may not be incompetent.