Artificial Intelligence (AI) is a technical subject, so companies place it in the domain of the IT department. It is technical, it uses data, and it involves computers, so AI becomes IT's responsibility.
I am not saying that is the wrong place for it. What I want to communicate is that AI thinking is probabilistic, while most programmers think deterministically.
Programmers and other IT people can change their perspective. The mental shift is easy to make for tasks within projects, but it does require the IT person to understand the difference. Let me explain.
In a typical non-AI case, a programmer creates a screen that requires the user to fill in 5 fields. The program takes the input from each field and distributes it to 5 different files in the database. That is how it is supposed to work, and the program must do the same process time after time. That is a deterministic way of thinking: the program must do a task, and if the task is not done exactly as programmed, there is a problem that needs to be fixed.
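The deterministic expectation can be sketched in a few lines. This is a minimal illustration, not a real application; the field names and the dict-backed "database" are hypothetical stand-ins.

```python
# Hypothetical field names for the 5-field form described above.
REQUIRED_FIELDS = ["name", "email", "address", "phone", "account_id"]

def save_form(form: dict, db: dict) -> None:
    """Deterministic processing: every field must be present, and each
    value is written to its own destination. Any deviation from the
    specification is a bug, not an acceptable variance."""
    for field in REQUIRED_FIELDS:
        if field not in form:
            # A deterministic program fails loudly; there is no
            # "probably close enough" for a missing field.
            raise ValueError(f"missing field: {field}")
    for field in REQUIRED_FIELDS:
        db.setdefault(field, []).append(form[field])

db = {}
save_form({"name": "A", "email": "a@x", "address": "1 Main St",
           "phone": "555-0100", "account_id": "42"}, db)
print(len(db))  # exactly 5 destinations, every time
```

Run the same input a thousand times and the result is identical every time; that repeatability is precisely what the deterministic mindset expects.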
AI requires a more probabilistic mindset. The difference in perspective is frustrating many programmers and may be killing many internal AI projects.
An AI (or machine learning) process is predictive, and any prediction carries variance. A simple example: an AI model for sales activity might predict 20 units will be sold today and 25 tomorrow, while in reality 18 units are sold today and 26 tomorrow. Those numbers may be well within the error range of the model, which means the model is likely doing its job.
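The probabilistic question is not "did the prediction match?" but "did the actual value land inside the model's error band?" A minimal sketch, assuming a hypothetical +/-15% relative error tolerance (the tolerance would come from the model's validation, not be picked by hand):

```python
def within_tolerance(predicted: float, actual: float,
                     rel_error: float = 0.15) -> bool:
    """Return True if the actual value falls inside the model's
    expected error band (here a hypothetical +/-15% relative error)."""
    return abs(actual - predicted) <= rel_error * predicted

# The sales example from the text: predictions of 20 and 25 units,
# actuals of 18 and 26.
forecasts = [(20, 18), (25, 26)]
results = [within_tolerance(p, a) for p, a in forecasts]
print(results)  # both days land inside the band -> [True, True]
```

Judged this way, a "miss" of two units is not a defect to be fixed; it is the expected behavior of a model that was never going to be exact.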
To someone looking at the results deterministically, the model was wrong. The deterministic thinker will go back and tinker with the model until it is "perfect." In the real world, no predictive model is ever going to be perfect. Perfect is neither necessary nor realistic. What comes to mind is the cliché "the perfect is the enemy of the good."
Most IT people understand the difference, at least on the surface, once it is explained to them. Problems come when they revert, often unintentionally, to viewing the results of AI models deterministically and looking for perfect predictions.
That brings to mind another old saying: "better a diamond with a flaw than a pebble without one." Let me coin a new saying: close is good enough in horseshoes, atomic bombs, and predictive analytics.