Our previous report explored how mantras such as ‘Move fast and break things' aren't always the wisest approach to sustainable business in a hyper-competitive world. What organisations need is insight from their data into core business processes, so that they can move confidently into new opportunities - as securely as possible in a world of escalating threats.
Cloud platforms and infrastructures increasingly offer such capabilities, allowing organisations to scale projects to meet the peaks of the market - or of their ambitions. Allied with this are lower and more predictable costs, a seamless upgrade path, and new applications coming on stream within an integrated stack.
In a recent Computing Research survey, over 81 percent of respondents identified access to new product features and technical capabilities as a key motivation for moving business functions, including back-office processes, into the cloud.
One of the major incoming technologies, of course, is artificial intelligence (AI), combined with the rise of automation, software robots - sometimes called digital employees - and other ‘Industry 4.0' technologies.
Together, these offer the promise of a smarter, more connected world, in which organisations can analyse not only historic data, but also live data from sensors and smart environments, and use the findings to predict what may happen in a range of future conditions.
Where does AI fit in?
In this fast-emerging world, new levels of insight are promised from deploying AI's pattern-recognition abilities to reveal connections or trends in data sets of every size and type. Meanwhile, replicable processes can be farmed out to virtual machines via automation, turning human skills into on-demand digital workforces in the cloud.
Such processes complement, rather than replace, human ingenuity by learning about employee tasks and automating the ones that add least value, leaving humans to do what they do best: use their own expertise, intuition, and skills.
These cloud-based technologies apply just as much to internal processes as they do to customer-facing ones. Indeed, Computing found that migrating internal functions to the cloud helped create a better working environment for a large majority of survey respondents. The handling of sensitive data and regulatory compliance have also improved, the survey found.
According to Computing Research, IT leaders see strong potential in using AI and automation in the cloud to improve internal functions. Nearly 63 percent identify this as either a major or significant motivation for shifting data and processes onto a hosted platform.
The same percentage say that migrating back-office functions to the cloud led to a very successful outcome in terms of their AI and automation capabilities. AI and machine learning workloads place heavy demands on processors, so having a scalable resource is essential.
Bias & bad data
But technologies such as AI and automation are not without their risks, with one of the most significant being the automation of bad assumptions about behaviour, markets, organisations, or even society itself.
These flaws can be given a veneer of trust and reliability, simply because they have been output from AI systems - some of which have been adopted poorly on a wave of hype by organisations looking for instant cost savings or productivity gains. That's not what the technology is designed for.
There is also the big-picture risk of creating a world of technology haves and have-nots - an increasingly divided society, rather than a more equal and collaborative one.
A related issue is that of confirmation bias. This is a known problem in scientific research, in which a system's design is weighted, without the designer realising it, to produce a pre-set outcome. In such cases, the only possible output is confirmation of what the designer already believes - even if that assumption is incorrect or based on outdated information.
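To make the mechanism concrete, here is a minimal, hypothetical sketch of how a biased analysis pipeline can only ever confirm its designer's belief: records that contradict the expected outcome are quietly discarded as 'outliers' before the confirmation rate is computed. The data values and the belief itself are invented for illustration.

```python
# Hypothetical example: a pipeline designed (unwittingly) to confirm a belief.
belief = lambda value: value > 100  # the outcome the designer expects

data = [120, 80, 95, 150, 60, 110]  # invented sample values

# Biased step: records that contradict the belief are discarded as
# "outliers" before the confirmation rate is calculated.
kept = [v for v in data if belief(v)]
confirmation_rate = sum(belief(v) for v in kept) / len(kept)
print(confirmation_rate)  # 1.0 - total confirmation, by construction

# An honest check over all the records tells a different story.
true_rate = sum(belief(v) for v in data) / len(data)
print(true_rate)  # 0.5
```

However the real data is distributed, the biased pipeline reports 100 percent support for the designer's assumption - which is exactly why the flaw is so hard to spot from the output alone.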
Linked to this is the risk of automating historic or societal bias, thanks to a legacy of partial, incomplete, or flawed data - or data that has been gathered in a biased environment in terms of people's ethnicity, age, gender, health status, sexuality, religious belief, credit score, employment history, spent criminal records, or even postcode.
Minorities & diversity
For example, there has been widespread controversy about bias against ethnic minorities and women within some AI or facial recognition systems. While it is not impossible that a system's designers have deliberately introduced such a bias, the problem is often related to incomplete data, or data that has been gathered within a biased system over many years.
On other occasions, it is rooted in a lack of diversity in development teams, who don't realise that they have created and trained a system in a closed loop of people who are too similar to each other.
Sometimes bias in data stems from a problem that is harder to counteract. For example, if reams of data have been gathered about a society or organisation in which a large majority of people fall into one group, then it stands to reason that far more data will have been gathered about that group than any other.
If the purpose of the big data set is simply to reflect the demographics of the group, then this may not be a problem. However, if the same data set is then used to design services that are used equally by everyone - such as a driverless car algorithm, security system, or insurance policy - then it can become a serious problem. Comparatively little data will have been gathered about members of the minority group, meaning that the data describing them - and anything built on it - will be far less accurate.
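The effect can be shown with a deliberately simple, hypothetical sketch: a trivial model that learns only the most common outcome in its training data looks acceptable in aggregate, yet fails entirely for the minority group whose typical outcome differs. The groups, outcomes, and 90/10 split are all invented for illustration.

```python
# Hypothetical illustration: aggregate accuracy can hide total failure
# for a minority group when training data is dominated by the majority.
from collections import Counter

# Simulated records of (group, outcome): 90% group "A", 10% group "B".
training = [("A", "approve")] * 90 + [("B", "decline")] * 10

# "Training": learn the single most frequent outcome, ignoring group.
most_common = Counter(outcome for _, outcome in training).most_common(1)[0][0]

def predict(group):
    return most_common  # the same answer for everyone

accuracy_overall = sum(predict(g) == o for g, o in training) / len(training)
accuracy_minority = sum(predict(g) == o for g, o in training if g == "B") / 10

print(accuracy_overall)   # 0.9 - looks acceptable in aggregate
print(accuracy_minority)  # 0.0 - the minority group is always misclassified
```

Real machine learning models are far more sophisticated than this, but the underlying arithmetic is the same: a system optimised against the whole data set can score well overall while performing badly for the people it has seen least.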
The takeaway is that AI systems are only as good as the training data that has informed their creation, along with the accuracy, fairness, and lack of bias in the information they are being tasked with analysing. So beware of using AI to automate bad assumptions or social prejudices: consider all of these issues at the outset of any business project.