
Artificial Intelligence (AI) is a concept that has been part of public discourse for decades, often depicted within science fiction films or debates on how intelligent machines will take over the world, relegating the human race to a mundane, servile existence in support of the new AI order. While this picture is a somewhat caricature-like depiction of AI, the reality is that artificial intelligence has arrived in the present, and many of us regularly interact with the technology in our daily lives. AI technology is no longer the realm of futurologists but an integral component of the business model of many organisations and a key strategic element in the plans of many sectors of business, medicine and government on a global scale. This transformational impact of AI has led to significant academic interest, with recent studies researching the impacts and consequences of the technology rather than its performance implications, which had been the key research domain for a number of years.
The literature has offered various definitions of AI, each encapsulating the key concept of non-human intelligence programmed to perform specific tasks. Russell and Norvig (2016) used the term AI to describe systems that mimic cognitive functions generally associated with human attributes such as learning, speech and problem solving. A more detailed and perhaps elaborate characterisation was presented in Kaplan and Haenlein (2019), where the study describes AI in the context of its ability to independently interpret and learn from external data to achieve specific outcomes via flexible adaptation. The use of big data has enabled algorithms to deliver excellent performance for specific tasks (robotic vehicles, game playing, autonomous scheduling etc.) and a more pragmatic application of AI, rather than the more cognitively focussed human-level AI, where the complexities of human thinking and feelings have yet to be translated effectively (Hays & Efros, 2007; Russell & Norvig, 2016). The common thread amongst these definitions is the increasing capability of machines to perform specific roles and tasks currently performed by humans within the workplace and society in general.
The ability of AI to overcome some of the computationally intensive, intellectual and perhaps even creative limitations of humans opens up new application domains within education, marketing, healthcare, finance and manufacturing, with resulting impacts on productivity and performance. AI enabled systems within organisations are expanding rapidly, transforming business and manufacturing and extending their reach into what would normally be seen as exclusively human domains (Daugherty & Wilson, 2018; Miller, 2018). The era of AI systems has progressed to levels where autonomous vehicles, chatbots, autonomous planning and scheduling, gaming, translation, medical diagnosis and even spam fighting can be performed via machine intelligence. The views of AI experts presented in Müller and Bostrom (2016) predicted that AI systems are likely to reach overall human ability by 2075, and some experts feel that further progress of AI towards superintelligence may be bad for humanity.
1. Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Malaysia: Pearson Education Limited.
2. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25.
3. Hays, J., & Efros, A. A. (2007). Scene completion using millions of photographs. ACM Transactions on Graphics (TOG), 26(3), 4.
4. Wilson, J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 115–123.
5. Miller, S. (2018). AI: Augmentation, more so than automation. Asian Management Insights, 5(1), 1–20.
6. Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. Fundamental issues of artificial intelligence. Cham: Springer, 555–572.
Application domains
The AI literature has identified several separate domains in which the technology can be applied:
Digital Imaging, Education, Government, Healthcare, Manufacturing, Robotics and Supply Chain.
Studies have analysed the impact of AI and its potential to replace humans via intelligent automation
within manufacturing, supply chain, production and even the construction industry (Kusiak, 1987;
Muhuri, Shukla, & Abraham, 2019; Parveen, 2018). Existing factory processes will be increasingly
subject to analysis to ascertain whether they could be automated (Lee, 2002; Löffler & Tschiesner,
2013; Yang, Chen, Huang, & Li, 2017). AI centric technologies will be able to monitor and control
processes in real time offering significant efficiencies over manual processes (Jain & Mosier, 1992;
Zhong, Xu, Klotz, & Newman, 2017a). Organisations have posited the benefits of integrating AI
technologies in the development of intelligent manufacturing and the smart factory of the future (Li,
Hou, Yu, Lu, & Yang, 2017; Nikolic, Ignjatic, Suzic, Stevanov, & Rikalovic, 2017). The literature
has generally moved on from the somewhat dated concept of AI based machines replacing all
human workers. Studies have recognised the realistic limits of the continuing drive to automation,
highlighting a human in the loop concept where the focus of AI is to enhance human
capability, not replace it (Katz, 2017; Kumar, 2017). Humans are likely to move up the value chain
to focus on design and integration related activities as part of an integrated AI, machines and human
based workforce (DIN & DKE, 2018; Jonsson & Svensson, 2016; Makridakis, 2018; Wang,
Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b; Wang & Wang, 2016). Manufacturing
organisations are likely to use AI technologies within a production environment where intelligent
machines are socially integrated within the manufacturing process, effectively functioning as co-
workers for key tasks or to solve significant problems (Haeffner & Panuwatwanich, 2017).
Khanna, Sattar, and Hansen (2013) emphasised the importance of AI in healthcare, particularly in medical informatics. There is a growing requirement for new technologies that understand the complexities of hospital operations and provide the necessary productivity gains in resource usage and patient service delivery. AI has the potential to offer improved patient care and diagnosis as well as interpretation of medical imaging in areas such as radiology (Dreyer & Allen, 2018; Kahn, 2017). Screening for breast cancer (BC) and other related conditions could be more accurate and efficient using AI technology. Houssami et al.'s (2017) study analysed the use of AI for BC screening, highlighting its potential to reduce false positives and related human detection errors. The study acknowledges some of the interrelated ethical and societal trust factors, but the boundaries of reliance on AI and acceptable human in the loop involvement are still to be established. The application of AI and related digital technologies within public health is rapidly developing. However, the collection, storage and sharing of large data sets derived from AI technology raises ethical questions connected to governance, quality, safety, standards, privacy and data ownership (Zandi, Reis, Vayena, & Goodman, 2019). Thesmar et al. (2019) posited the benefits of utilising AI technology for insurance claims within healthcare: claim submission, claim adjudication and fraud analysis can significantly benefit from AI use.
Education and information search are areas where the literature has identified the potential benefits of AI technology solutions. Chaudhri, Lane, Gunning, and Roschelle (2013) discussed the application of AI in education to improve teacher effectiveness and student engagement. The study analysed the potential of AI within education in the context of intelligent game-based learning environments, tutoring systems and intelligent narrative technologies. The relevance of libraries in the modern technology era has also received focus within the literature. Arlitsch and Newell (2017) discussed how AI can change library processes, staffing requirements and library users, arguing that it is important for libraries to focus on human qualities and the value added by human interaction integrated with AI to provide a richer user experience. Moreover, Mikhaylov, Esteve, and Campion (2018) considered the use of AI capabilities from the perspective of educating the public on policy, and as a more effective mechanism for high uncertainty environments.

Data and information


The topic of big data and its integration with AI has received significant interest within the wider literature. Studies have identified the benefits of applying AI technologies to big data problems and the significant value of analytic insight and predictive capability for a number of scenarios (Rubik & Jabs, 2018). Health related studies have analysed the impact and contribution of big data and AI, arguing that these technologies can greatly support patient health based diagnosis and predictive capability (Beregi et al., 2018; Schulz & Nakamoto, 2013). Big Data Analytics (BDA) develops the methodological analysis of large data structures, often categorised under the terms: volume, velocity, variety, veracity and value. BDA combined with AI has the potential to transform areas of manufacturing, health and business intelligence, offering advanced insights within a predictive context (Abarca-Alvarez, Campos-Sanchez, & Reinoso-Bellido, 2018; Shukla, Tiwari, & Beydoun, 2018; Spanaki, Gürgüç, Adams, & Mulligan, 2018; Wang & Wang, 2016).
Organisations are increasingly deploying data visualisation tools and methods to make sense of their
big data structures. In scenarios where the limitations of human perception and cognition are taken into
account, greater levels of understanding and interpretation can be gained from the analysis and
presentation of data using AI technologies (Olshannikova, Ometov, Koucheryavy, & Olsson, 2015).
The analysis and processing of complex heterogeneous data is problematic. Organisations can extract significant value and key management information from big data via intelligent AI based visualisation tools (Zheng, Wu, Chen, Qu, & Ni, 2016; Zhong, Xu, Chen, & Huang, 2017b).
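To make the idea concrete, the sketch below shows one common shape such an AI assisted visualisation pipeline can take: a wide dataset is projected down to two dimensions for human perception while an algorithm surfaces the grouping structure. The synthetic data, cluster count and library choices are illustrative assumptions, not a method taken from the cited studies.

# Illustrative sketch: compressing high-dimensional data so humans can read it.
# The data and parameter choices below are assumptions for demonstration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # stand-in for a wide, heterogeneous dataset

coords = PCA(n_components=2).fit_transform(X)          # project to 2-D for display
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)  # machine-found structure

plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=8)
plt.title("Cluster structure surfaced for human interpretation")
plt.show()

The design point is the division of labour the literature describes: the algorithm handles scale and dimensionality, while the visual output is shaped for the limits of human perception.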
Challenges
The implementation of AI technologies can present significant challenges for government and
organisations as the scope and depth of potential applications increases and the use of AI becomes more
mainstream. These challenges are categorised in Fig. 1 and discussed in this section.
Table 2 lists the specific AI challenges from the literature, with a breakdown of the details of each challenge.
Social challenges
The increasing use of AI is likely to challenge cultural norms and act as a potential barrier within certain sectors of the population. For example, Xu et al. (2019) highlighted the challenges that AI will bring to healthcare in the context of the change in interaction and patient education. This is likely to impact the patient as well as the clinician. The study highlighted the requirement for clinicians to learn to interact with AI technologies in the context of healthcare delivery, and for patient education to mitigate the fear of technology among many patient demographics (Xu et al., 2019). Thrall et al. (2018) argued that culture is one of the key barriers to AI adoption within radiology, as patients may be reticent to interact with new technologies and systems. Social challenges have been highlighted as potential barriers to the further adoption of AI technologies. Sun and Medaglia (2019) identified social challenges relating to unrealistic expectations towards AI technology and insufficient knowledge of the values and advantages of AI technologies. Studies have also discussed the social aspects of potential job losses due to AI technologies, a topic that has received widespread publicity in the media and has been debated within numerous forums. Risse (2019) proposed that AI creates challenges for humans that can affect the nature of work and potentially influence people's status as participants in society. Human workers are likely to progress up the value chain to focus on utilising human attributes to solve design and integration problems as part of an integrated AI and human centric workforce (DIN & DKE, 2018; Jonsson & Svensson, 2016; Makridakis, 2018; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b; Wang & Wang, 2016).

Economic challenges
The mass introduction of AI technologies could have a significant economic impact on organisations and institutions in the context of required investment and changes to working practices. Reza Tizhoosh and Pantanowitz (2018) focused on the affordability of technology within the medical field, arguing that AI is likely to require substantial financial investment. The study highlighted the impact on pathology laboratories, where current financial pressures may be exacerbated by the additional pressure to adopt AI technologies. Sun and Medaglia (2019) identified several healthcare related economic challenges, arguing that the introduction of AI based technologies is likely to influence the profitability of hospitals and potentially raise treatment costs for patients.
Table 2
AI challenges from the literature.

Social challenges: Patient/clinician education; cultural barriers; human rights; country specific disease profiles; country specific medical practices; unrealistic expectations towards AI technology; insufficient knowledge of the values and advantages of AI technologies.

Economic challenges: Affordability of required computational expenses; high treatment costs for patients; high costs and reduced profits for hospitals.

Data challenges: Lack of data to validate benefits of AI solutions; quantity and quality of input data; transparency and reproducibility; dimensionality obstacles; insufficient size of the available data pool; lack of data integration and continuity; lack of standards for data collection; format and quality.

Organisational and managerial challenges: Realism of AI; better understanding of the needs of health systems; organisational resistance to data sharing; lack of in-house AI talent; lack of interdisciplinary talent; threat of replacement of the human workforce; lack of strategy for AI development.

Technological and technology implementation challenges: Non-Boolean nature of diagnostic tasks; adversarial attacks; lack of transparency and interpretability; design of AI systems; AI safety; specialisation and expertise; big data; architecture issues and complexities in interpreting unstructured data.

Political, legal and policy challenges: Copyright issues; governance of autonomous intelligence systems; responsibility and accountability; privacy/safety; national security threats from foreign-owned companies collecting sensitive data; lack of rules of accountability in the use of AI; costly human resources still legally required to account for AI based decisions; lack of official industry standards for AI use and performance evaluation.

Ethical challenges: Lack of trust towards AI based decision making; unethical use of shared data; responsibility and explanation of decisions made by AI; processes relating to AI and human behaviour; compatibility of machine versus human value judgement; moral dilemmas; AI discrimination.


1. Muhuri, P. K., Shukla, A. K., & Abraham, A. (2019). Industry 4.0: A bibliometric analysis and detailed overview. Engineering Applications of Artificial Intelligence, 78, 218–235.

2. Mullainathan, S., & Spiess, J. (2017). Machine learning: An applied econometric approach. Journal of Economic Perspectives, 31(2), 87–106.

3. Parveen, R. (2018). Artificial intelligence in construction industry: Legal issues and regulatory challenges. International Journal of Civil Engineering and Technology, 9(13), 957–962.

4. Lee, J. H. (2002). Artificial intelligence-based sampling planning system for dynamic manufacturing process. Expert Systems with Applications, 22(2), 117–133.

5. Löffler, M., & Tschiesner, A. (2013). The Internet of things and the future of manufacturing. McKinsey & Company. Accessed April 2019. https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/the-internet-of-things-and-the-future-of-manufacturing.

6. Yang, J., Chen, Y., Huang, W., & Li, Y. (2017, September). Survey on artificial intelligence for additive manufacturing. 2017 23rd International Conference on Automation and Computing (ICAC) (pp. 1–6).

7. Jain, P. K., & Mosier, C. T. (1992). Artificial intelligence in flexible manufacturing systems. International Journal of Computer Integrated Manufacturing, 5(6), 378–384.

8. Zhong, R. Y., Xu, X., Klotz, E., & Newman, S. T. (2017a). Intelligent manufacturing in the context of industry 4.0: A review. Engineering, 3(5), 616–630.

9. Li, B. H., Hou, B. C., Yu, W. T., Lu, X. B., & Yang, C. W. (2017). Applications of artificial intelligence in intelligent manufacturing: A review. Frontiers of Information Technology & Electronic Engineering, 18(1), 86–96.

10. Liu, J., Qi, Y., Yang Meng, Z., & Fu, L. (2017). Self-learning Monte Carlo method. Physical Review B, 95, 041101(R). https://doi.org/10.1103/PhysRevB.95.041101.

11. Nikolic, B., Ignjatic, J., Suzic, N., Stevanov, B., & Rikalovic, A. (2017). Predictive manufacturing systems in industry 4.0: Trends, benefits and challenges. Annals of DAAAM & Proceedings, 28.

12. Katz, Y. (2017). Manufacturing an artificial intelligence revolution. Available at SSRN 3078224.

13. Gupta, R. K., & Kumari, R. (2017). Artificial intelligence in public health: Opportunities and challenges. JK Science, 19(4), 191–192.

14. Jonsson, A., & Svensson, V. (2016). Systematic lead time analysis. Chalmers University of Technology. Accessed April 2019. http://www.publications.lib.chalmers.se/records/fulltext/238746/238746.pdf.

15. Makridakis, S. (2018). Forecasting the impact of artificial intelligence, Part 3 of 4: The potential effects of AI on businesses, manufacturing, and commerce. Foresight: The International Journal of Applied Forecasting, (49), 18–27.

16. Wang, L., Törngren, M., & Onori, M. (2015a). Current status and advancement of cyber-physical systems in manufacturing. Journal of Manufacturing Systems, 37, 517–527.

17. Wang, X., Li, X., & Leung, V. C. M. (2015b). Artificial intelligence-based techniques for emerging heterogeneous network: State of the arts, opportunities, and challenges. IEEE Access, 3, 1379–1391.

18. Wang, L. (2016). Discovering phase transitions with unsupervised learning. Physical Review B, 94, 195105. https://doi.org/10.1103/PhysRevB.94.195105.

19. Wang, L., & Wang, X. V. (2016). Outlook of cloud, CPS and IoT in manufacturing. Cloud-based cyber-physical systems in manufacturing. Cham: Springer, 377–398.

20. Khanna, S., Sattar, A., & Hansen, D. (2013). Artificial intelligence in health – The three big challenges. Australasian Medical Journal, 6(5), 315–317.

21. Kahn, C. E. (2017). From images to actions: Opportunities for artificial intelligence in radiology. Radiology, 285(3), 719–720.

22. Zandi, D., Reis, A., Vayena, E., & Goodman, K. (2019). New ethical challenges of digital technologies, machine learning and artificial intelligence in public health: A call for papers. Bulletin of the World Health Organization, 97(1), 2.

23. Thesmar, D., Sraer, D., Pinheiro, L., Dadson, N., Veliche, R., & Greenberg, P. (2019). Combining the power of artificial intelligence with the richness of healthcare claims data: Opportunities and challenges. PharmacoEconomics. https://doi.org/10.1007/s40273-019-00777-6.

24. Chaudhri, V. K., Lane, H. C., Gunning, D., & Roschelle, J. (2013). Applications of artificial intelligence to contemporary and emerging educational challenges. Artificial Intelligence Magazine, Intelligent Learning Technologies: Part, 2(34), 4.

25. Arlitsch, K., & Newell, B. (2017). Thriving in the age of accelerations: A brief look at the societal effects of artificial intelligence and the opportunities for libraries. Journal of Library Administration, 57(7), 789–798.

26. Mikhaylov, S. J., Esteve, M., & Campion, A. (2018). Artificial intelligence for the public sector: Opportunities and challenges of cross-sector collaboration. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128). https://doi.org/10.1098/rsta.2017.0357.

27. Beregi, J., Zins, M., Masson, J., Cart, P., Bartoli, J., Silberman, B., …, & Meder, J. (2018). Radiology and artificial intelligence: An opportunity for our specialty. Diagnostic and Interventional Imaging, 99(11), 677–678.

28. Abarca-Alvarez, F. J., Campos-Sanchez, F. S., & Reinoso-Bellido, R. (2018). Demographic and dwelling models by artificial intelligence: Urban renewal opportunities in Spanish coast. International Journal of Sustainable Development and Planning, 13(7), 941–953.

29. Shukla, N., Tiwari, M. K., & Beydoun, G. (2018). Next generation smart manufacturing and service systems using big data analytics. Computers & Industrial Engineering, 128, 905–910.

30. Spanaki, K., Gürgüç, Z., Adams, R., & Mulligan, C. (2018). Data supply chain (DSC): Research synthesis and future directions. International Journal of Production Research, 56(13), 4447–4466.

31. Olshannikova, E., Ometov, A., Koucheryavy, Y., & Olsson, T. (2015). Visualizing big data with augmented and virtual reality: Challenges and research agenda. Journal of Big Data, 2(1). https://doi.org/10.1186/s40537-015-0031-2.

32. Zhong, R. Y., Xu, C., Chen, C., & Huang, G. Q. (2017b). Big data analytics for physical internet-based intelligent manufacturing shop floors. International Journal of Production Research, 55(9), 2610–2621.

Technological and technology implementation challenges

Studies have analysed the non-Boolean nature of diagnostic tasks within healthcare and the challenges of applying AI technologies to the interpretation of data and imaging. Reza Tizhoosh and Pantanowitz (2018) highlighted the fact that humans apply cautious language or descriptive terminology, not just binary language, whereas AI based systems tend to function as a black box where the lack of transparency acts as a barrier to adoption of the technology. These points are reinforced in Cleophas and Cleophas (2010) and Kahn (2017), where the research identified several limitations of AI for imaging and medical diagnosis, thereby impacting clinician confidence in the technology. Cheshire (2017) discusses the limitation of medical AI termed loopthink: a type of implicit bias in which the system does not correctly reappraise information or revise an ongoing plan of action, and thus would disfavour qualitative human moral principles. Weak loopthink refers to the intrinsic inability of computer intelligence to redirect executive data flow because of its fixed internal hard wiring, un-editable sectors of its operating system, or unalterable lines of its programme code. Strong loopthink refers to AI suppression due to internalisation of the ethical framework.

Challenges exist around the architecture of AI systems and the need for sophisticated structures to understand human cognitive flexibility, learning speed and even moral qualities (Baldassarre, Santucci, Cartoni, & Caligiore, 2017; Edwards, 2018). Sun and Medaglia (2019) reviewed the technological challenges of algorithm opacity and the lack of ability to read unstructured data. Thrall et al. (2018) considered the challenge of a limited pool of investigators trained in AI and radiology. This could be solved by recruiting scientists with backgrounds in AI, but also by establishing educational programmes in radiology professional services (Nguyen & Shetty, 2018; Thrall et al., 2018). Varga-Szemes et al. (2018) highlighted that machine learning algorithms should be created by machine learning specialists with relevant knowledge of medicine and an understanding of possible outcomes and consequences. Mitchell (2019) highlighted that AI systems do not yet have the essence of human intelligence: they are not able to understand the situations humans experience and derive the right meaning from them. This barrier of meaning makes current AI systems vulnerable in many areas, but particularly to hacker attacks termed "adversarial examples". In these attacks, a hacker makes specific and subtle changes to sound, image or text files that have no human cognitive impact but can cause a programme to make potentially catastrophic errors. As the programmes do not understand the inputs they process and the outputs they produce, they are susceptible to unexpected errors and undetectable attacks. These impacts can influence domains such as computer vision, medical image processing, speech recognition and language processing (Mitchell, 2019).
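As an illustration of the mechanism described above, the sketch below implements the widely known gradient-sign style of adversarial perturbation: a change too small for a human to notice, chosen in the direction that most increases the model's loss. The model, labels and epsilon value are placeholder assumptions, not details from Mitchell (2019).

# Hedged sketch of an adversarial perturbation on a differentiable classifier.
import torch
import torch.nn.functional as F

def adversarial_perturb(model, x, true_label, epsilon=0.01):
    # Compute the loss gradient with respect to the input itself.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Nudge each input value by a tiny amount in the direction that
    # increases the loss: imperceptible to a person, yet often enough
    # to flip the prediction of a model that does not "understand" its input.
    return (x + epsilon * x.grad.sign()).detach()

The point of the sketch is the asymmetry Mitchell describes: because the model's decision surface does not track human meaning, a perturbation that carries no meaning for people can still move the input across a decision boundary.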

Political, legal and policy challenges

Gupta and Kumari (2017) discussed legal challenges connected to responsibility when errors occur in the use of AI systems. Another legal challenge of using AI systems is the issue of copyright: the current legal framework needs significant changes in order to effectively protect and incentivise human generated work (Zatarain, 2017). Wirtz, Weyerer, and Geyer (2019) focused on the challenges of implementing AI within government, positing the requirement for a more holistic understanding of the range and impact of AI-based applications and associated challenges. The study analysed the concept of AI law and regulations to control governance, including autonomous intelligence systems, responsibility and accountability as well as privacy/safety.

Studies have identified the complexities of implementing AI based systems within government and the public sector. Sun and Medaglia (2019) used a case study approach to analyse the challenges of applying AI within the public sector in China, examining three groups of stakeholders – government policy-makers, hospital managers/doctors, and IT firm managers – to identify how they perceive the challenges of AI adoption in the public sector. The study analysed the scope of changes and the impact on citizens in the context of political, legal and policy challenges as well as national security threats from foreign-owned companies.

Ethical challenges

Researchers have discussed the ethical dimensions of AI and the implications of greater use of the technology. Individuals and organisations can exhibit a lack of trust and concerns relating to the ethical dimensions of AI systems and their use of shared data (Sun & Medaglia, 2019). The rapid pace of change and development of AI technologies increases the concern that ethical issues are not dealt with formally. It is not clear how ethical and legal concerns, especially around responsibility for and analysis of decisions made by AI based systems, can be resolved. Adequate policies, regulations, ethical guidance and a legal framework to prevent the misuse of AI should be developed and enforced by regulators (Duan et al., 2019). Gupta and Kumari (2017) reinforce many of these points, highlighting the ethical challenges relating to greater use of AI, data sharing issues and the interoperability of systems. AI based systems may exhibit levels of discrimination even though the decisions made do not involve humans in the loop, highlighting the criticality of AI algorithm transparency (Bostrom & Yudkowsky, 2011).

Future opportunities

AI technology in all its forms is likely to see greater levels of adoption within organisations as the range of applications and levels of automation increase. Studies have estimated that by 2030, 70 per cent of businesses are likely to have adopted some form of AI technology within their business processes or factory settings (Bughin et al., 2018). Studies have posited the benefits of greater levels of adoption of AI within a range of applications, with manufacturing, healthcare and digital marketing attracting significant academic interest (Juniper Research, 2018).

The factories of the future are likely to utilise AI technology extensively as production becomes more automated and industry migrates to a more intelligent platform using AI and cyber physical systems (Wang & Wang, 2016). Within healthcare related studies, researchers have proposed new opportunities for the application of AI within medical diagnosis and pathology, where mundane tasks can be automated with greater levels of speed and accuracy (Reza Tizhoosh & Pantanowitz, 2018). Through the use of human biofield technology, AI systems linked to sensors placed on and near the human body can monitor health and well-being (Rubik & Jabs, 2018). AI technologies will be able to monitor numerous life-signs parameters via Body Area Networks (BANs), where remote diagnoses requiring specialised clinical opinion and intervention will be checked by a human (Hughes, Wang, & Chen, 2012).
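The human in the loop monitoring pattern described here can be made concrete with a small sketch: routine sensor readings are logged automatically, while borderline or alarming values are escalated for human clinical review. The vital-sign thresholds and function name below are invented for illustration and are not taken from the cited BAN literature.

def triage_reading(heart_rate_bpm, spo2_percent):
    """Route a BAN-style sensor reading: automate the routine cases,
    escalate the rest to a human clinician. Thresholds are illustrative."""
    if heart_rate_bpm > 140 or spo2_percent < 90:
        return "escalate: specialised clinical opinion required"
    if heart_rate_bpm > 110 or spo2_percent < 94:
        return "flag for human review"
    return "log automatically"

print(triage_reading(heart_rate_bpm=120, spo2_percent=96))  # -> flag for human review

The design choice mirrors the text: the machine filters the high-volume routine stream, and human judgement is reserved for the cases that genuinely need specialised opinion.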

AI technologies have been incorporated into marketing and retail, where big data analytics are used to develop personalised profiles of customers and their predicted purchasing habits. Understanding and predicting consumer demand via integrated supply chains is more critical than ever, and AI technology is likely to be a critical integral element. Juniper Research (2018) predicts that demand forecasting using AI will more than treble between 2019 and 2023, and that chatbot interactions will reach 22 billion by 2023, up from current levels of 2.6 billion. The study highlights that firms are investing heavily in AI to improve trend analysis, logistics planning and stock management. AI based innovations such as the virtual mirror and visual search are set to improve customer interaction and narrow the gap between the physical and virtual shopping experience (Juniper Research, 2018).

Researchers have argued for a more realistic future in which the relationship between AI and humans is likely to transition towards a human in the loop collaborative context rather than an industry-wide replacement of humans (Katz, 2017; Kumar, 2017). Stead (2018) asserts the importance of establishing a partnership where the AI machine will calculate and/or predict and humans will explain and decide on the appropriate action. Humans are likely to focus on higher value activities requiring design, analysis and interpretation based on AI processing and outputs. Future organisations are likely to focus on creating value from an integrated human and AI collaborative workforce (Jonsson & Svensson, 2016; Makridakis, 2018; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b; Wang & Wang, 2016).

Perspectives from invited contributors

This section has been structured by employing an approach adopted from Dwivedi et al. (2015b) to present consolidated yet multiple perspectives on various aspects of AI from invited expert contributors. We invited each expert to set out their contribution in up to 3–4 pages, which are compiled in this section in largely unedited form, expressed directly as they were written by the authors. Such an approach creates an inherent unevenness in the logical flow but captures the distinctive orientations of the experts and their recommendations at this critical juncture in the evolution of AI (Dwivedi et al., 2015b). The list of topics and contributors is presented in Table 3.
Table 3
Invited contributor subject list.

Technological perspectives
Explainability and AI Systems – John S. Edwards
Information Theoretic Challenges, Opportunities & Research Agenda – Paul Walton

Business and management perspective
A Decision-Making Perspective – Yanqing Duan, John Edwards, Yogesh Dwivedi
AI-enabled Automation – Crispin Coombs
Labour Under Partial and Complete Automation – Spyros Samothrakis
A Generic Perspective of AI – Arpan Kar
Artificial Intelligence for Digital Marketing – Emmanuel Mogaji
Artificial Intelligence for Sales – Kenneth Le Meunier-FitzHugh, Leslie Caroline Le Meunier-FitzHugh
Complementary Assets and Affordable-tech as Pathways for AI in the Developing World: Case of India – P. Vigneswara Ilavarasan

Arts, humanities & law perspective
People-Centred Perspectives on Artificial Intelligence – Jak Spencer
Taste, Fear and Cultural Proximity in the Demand for AI Goods and Services – Annie Tubadji

Science and technology perspective
Perspectives on Artificial Intelligence in the Fundamental Sciences – Gert Aarts, Biagio Lucini
Science and Technology Studies – Vassilis Galanos

Government and public sector perspective
Artificial Intelligence in the Public Sector – Rony Medaglia
AI for SMEs and Public Sector Organisations – Sujeet Sharma, JB Singh
Public Policy Challenges of Artificial Intelligence (AI): A New Framework and Scorecard for Policy Makers and Governments – Santosh K Misra
Technological perspective

Explainability and AI systems – John S. Edwards


Explainability is the ability to explain the reasoning behind a particular decision, classification or forecast. It has recently become an increasingly topical issue in both the theory and practice of AI and machine learning systems.

Challenges. Explainability has been an issue ever since the earliest days of AI use in business in the 1980s. The ease of constructing explanations accounted for much of the early success of rule-based expert systems, compared to frame-based systems, where explanations were more difficult, and neural networks, where they were impossible. At their inception, neural networks were unable to give explanations except in terms of weightings with little real-world relevance. As a result, they were often referred to as "black box" systems. More recently, so-called deep learning systems (typically neural networks with more than one hidden layer) make the task of explanation even more difficult.
The implied “gold standard” has been that when a person makes a
decision, they can be asked to give an explanation, but this human explanation process is a more complex one than is
usually recognised in the AI literature, as indicated by Miller (2019). Even if a human explanation is given that
appears valid, is it accurate? Face-to-face job interviews are notorious for the risk of being decided on factors (such as
how the interviewee walks across the room) other than the ones the panel members think they are using. This is related
to the difficulty of making tacit knowledge explicit.
There is also a difference between the “how” explanations that are
useful for AI system developers and the “why” explanations that are most helpful to end-users. Preece (2018)
describes how this too was
recognised in the earliest days of expert systems such as MYCIN. Nevertheless, some of the recent AI literature
seems unaware of this; it is perhaps significant that the machine learning literature tends to use the term
interpretability rather than explainability. There are, however, many exceptions such as Adadi and Berrada (2018),
who identify four reasons for explanation: to justify, to control, to improve and to discover.
An important change in context is that governments are now introducing guidelines for the use of any type of automated decision-making system, not just AI systems. For example, the European Union's General Data Protection Regulation (GDPR) Article 22 states "The data subject shall have the right not to be subject to a decision based solely on automated processing", and the associated Recital 71 gives the data subject "the right…to obtain an explanation of the decision reached after such assessment and to challenge the decision". Similarly, the UK government has introduced a code of conduct for the use of "data-driven technology" in health and social care (Anonymous, 2018). In regulated industries, existing provisions about decision-making, such as outlawing "red-lining" in evaluating mortgage or loan applications, which were first enshrined in law in the United States (US) as far back as the 1960s, also apply to AI systems.

Opportunities. People like explanations, even when they are not really necessary. It is not a major disaster if Netflix® recommends a film I don't like to me, but even there a simple explanation like "because you watched <name of film/TV programme>" is added. Unfortunately, at the time of writing, it doesn't matter whether I watched that other film/TV programme all the way through or gave up after five minutes. There is plenty of scope for improving such simple explanations. More importantly, work here would give a foundation for understanding what really makes a good explanation for an automated decision, and this understanding should be transferable to systems which need a much higher level of responsibility, such as safety-critical systems, medical diagnosis systems or crime detection systems.
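A minimal sketch of this style of explanation, under assumed data: pick the already-watched item most similar to the recommendation and cite it. The titles and the item-item similarity values below are invented; a richer version would also weight by how much of each item was actually watched, addressing the five-minutes problem noted above.

import numpy as np

titles = ["Film A", "Film B", "Film C", "Film D"]
# Item-item similarities (e.g. from co-viewing statistics); values invented.
sim = np.array([
    [1.0, 0.8, 0.1, 0.3],
    [0.8, 1.0, 0.2, 0.4],
    [0.1, 0.2, 1.0, 0.6],
    [0.3, 0.4, 0.6, 1.0],
])

def explain(recommended, watched):
    # Cite the watched item that most strongly drives the recommendation.
    anchor = max(watched, key=lambda w: sim[recommended, w])
    return f"Because you watched {titles[anchor]}"

print(explain(recommended=1, watched=[0, 2]))  # -> Because you watched Film A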
Alternatively, a good explanation for an automated decision may not need to be judged on the same criteria that would be used for a human decision, even in a similar domain. People are good at recognising faces and other types of image, but most of us do not know how we do it, and so cannot give a useful explanation. Research into machine learning-based image recognition is relatively well advanced. The work of researchers at IBM and MIT on understanding the reasoning of generative adversarial networks (GANs) for image recognition suggests that "to some degree, GANs are organising knowledge and information in ways that are logical to humans" (Dickson, 2019). For example, one neuron in the network corresponds to the concept "tree". This line of study may even help us to understand how we humans do some tasks.
Contrary to both of these views, London (2019) argues that in medical diagnosis and treatment, explainability is less important than accuracy. London argues that human medical decision-making is not so different from a black box approach, in that there is often no agreed underlying causal model: "Large parts of medical practice frequently reflect a mixture of empirical findings and inherited clinical culture." (p.17) The outputs from a deep learning black box approach should therefore simply be judged in the same way, using clinical trials and evidence-based practice, and research should concentrate on striving for accuracy.
Lastly, advances in data visualisation techniques and technology offer the prospect of completely different
approaches to the traditional “explanation in words”.

Research agenda. We offer suggestions for research in five linked areas.

• Can explanations from a single central approach be tailored to different classes of explainee? Explanation approaches are typically divided into transparency and post hoc interpretation (see e.g. Preece, 2018), the former being more suitable for "how" explanations, the latter for "why". Is it possible to tailor explanations from a single central approach to different classes of explainee (developers, end-users, domain experts…)? For example, a visualisation approach for end-users that would allow drill-down for more knowledgeable explainees?

• What sort of explanation best demonstrates compliance with statute/regulation? For example, how specific does it have to be? UK train travellers often hear "this service is delayed because of delays to a previous service", which is a logically valid but completely useless explanation. Do there need to be different requirements for different industry sectors? What form should the explanation take – words, pictures, probabilities? The latter links to the next point.

• Understanding the validity and acceptability of using probabilities in AI explanation. It is well-known that many people are poor at dealing with probabilities (Tversky & Kahneman, 1983). Are explanations from AI systems in terms of probabilities acceptable? This approach is widely used in the healthcare sector already, but it is not clear how well understood even the existing explanations are, especially in the light of the comments by London mentioned in the previous section.

• Improving explanations of all decisions, not just automated ones. Can post hoc approaches like the IBM/MIT work on GANs produce better explanations of not only automated decisions, but also those made by humans?

• Investigating the perceived trade-off between transparency and system performance. It is generally accepted that there is an inverse relationship between performance/accuracy and explainability for an AI system, and hence a trade-off that needs to be made. For example, Niel Nickolaisen, vice president and CTO at human resource consulting company O.C. Tanner, observed: "I agree that there needs to be some transparency into the algorithms, but does that weaken the capabilities of the [machine learning] to test different models and create the ensemble that best links cause and effect?" (Holak, 2018). Does this trade-off have to be the case? Could a radical approach to explanation be an outlier to the trade-off curve?

1. Cleophas, T. J., & Cleophas, T. F. (2010). Artificial intelligence for diagnostic purposes: Principles, procedures and limitations. Clinical Chemistry and Laboratory Medicine, 48(2), 159–165.

2. Kahn, C. E. (2017). From images to actions: Opportunities for artificial intelligence in radiology. Radiology, 285(3), 719–720.

3. Cheshire, W. P. (2017). Loopthink: A limitation of medical artificial intelligence. Ethics and Medicine, 33(1), 7–12.

4. Baldassarre, G., Santucci, V. G., Cartoni, E., & Caligiore, D. (2017). The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction. The Behavioral and Brain Sciences, 40, e254.

5. Edwards, S. D. (2018). The HeartMath coherence model: Implications and challenges for artificial intelligence and robotics. AI and Society, 1–7. https://doi.org/10.1007/s00146-018-0834-8.

6. Thrall, J. H., Li, X., Li, Q., Cruz, C., Do, S., Dreyer, K., & Brink, J. (2018). Artificial intelligence and machine learning in radiology: Opportunities, challenges, pitfalls, and criteria for success. Journal of the American College of Radiology, 15(3), 504–508.

7. Nguyen, G. K., & Shetty, A. S. (2018). Artificial intelligence and machine learning: Opportunities for radiologists in training. Journal of the American College of Radiology, 15(9), 1320–1321.

8. Mitchell, M. (2019). Artificial intelligence hits the barrier of meaning. Information (Switzerland), 10(2). https://doi.org/10.3390/info10020051.

9. Zatarain, J. M. N. (2017). The role of automated technology in the creation of copyright works: The challenges of artificial intelligence. International Review of Law, Computers and Technology, 31(1), 91–104.

10. Sun, T. Q., & Medaglia, R. (2019). Mapping the challenges of artificial intelligence in the public sector: Evidence from public healthcare. Government Information Quarterly, 36(2), 368–383.

11. Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of big data – Evolution, challenges and research agenda. International Journal of Information Management, 48, 63–71.

12. Bostrom, N., & Yudkowsky, E. (2011). The ethics of Artificial Intelligence. In K. Frankish (Ed.). Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press.

13. Rubik, B., & Jabs, H. (2018). Artificial intelligence and the human biofield: New opportunities and challenges. Cosmos and History, 14(1), 153–162.

14. Hughes, L., Wang, X., & Chen, T. (2012). A review of protocol implementations and energy efficient cross-layer design for wireless body area networks. Sensors, 12(11), 14730–14773.

15. Wang, L. (2016). Discovering phase transitions with unsupervised learning. Physical Review B, 94, 195105. https://doi.org/10.1103/PhysRevB.94.195105.

16. Wang, L., & Wang, X. V. (2016). Outlook of cloud, CPS and IoT in manufacturing. Cloud-based cyber-physical systems in manufacturing. Cham: Springer, 377–398.

17. Wang, L., Törngren, M., & Onori, M. (2015a). Current status and advancement of cyber-physical systems in manufacturing. Journal of Manufacturing Systems, 37, 517–527.

18. Wang, X., Li, X., & Leung, V. C. M. (2015b). Artificial intelligence-based techniques for emerging heterogeneous network: State of the arts, opportunities, and challenges. IEEE Access, 3, 1379–1391.

19. Dickson, B. (2019). Explainable AI: Viewing the world through the eyes of neural networks. Available at https://bdtechtalks.com/2019/02/04/explainable-ai-gan-dissection-ibm-mit/ (accessed 21.03.19).

20. Holak, B. (2018). Forrester 5 AI predictions for 2019: Pragmatic AI takes hold. Available at https://searchcio.techtarget.com/news/252453560/5-AI-predictions-for-2019-Pragmatic-AI-takes-hold (accessed 22.03.19).

Table 6
AI opportunities.

Modelling explainability – In the fields of medical diagnosis and treatment, explainability is perhaps less important than accuracy. Opportunities exist in conceptualising AI in the context of a black box approach where outputs should be judged using clinical trials and evidence-based practice to strive for accuracy (London, 2019). (John S. Edwards)

Organisation effectiveness – There are a number of opportunities for organisations to utilise AI within a number of categories: organisational environment, operations, interaction, case management automation, governance and adaptiveness. AI can provide the opportunity for organisations to develop both operational and strategic situation awareness and to link that awareness through to action increasingly quickly, efficiently and effectively. (Paul Walton)

Transformational potential of AI – Opportunities exist for the development of a greater understanding of the real impact of decision making within organisations using AI in the context of: key success factors, culture, performance and system design criteria. (Yanqing Duan, John Edwards, Yogesh Dwivedi)

Automation complacency – Automation complacency and bias can speed up decision making when the recommendations are correct. In instances where AI provides incorrect recommendations, omission errors can occur as humans are either out of the loop or less able to assure decisions. Opportunities exist to explore and understand the factors that influence over-reliance on automation and how to counter identified errors. (Crispin Coombs)

Workforce transition – Society is likely to be significantly impacted by the AI technological trajectory if, as commentators suggest, society achieves full automation in the next 100 years (Müller & Bostrom, 2016; Walsh, 2018). The opportunity here for organisations and government is the effective management of this transition to mitigate this potentially painful change. (Spyros Samothrakis)

Enabler for platforms and ecosystems – The exploration of opportunities as to how AI can be leveraged not only at the firm level but as an enabler in platforms and ecosystems. AI may help to connect multiple firms and help in automating and managing information flows across multiple organisations in such platforms. Significant opportunities exist for AI to be used in such platforms to impact platform, firm and ecosystem productivity. (Arpan Kar)

Enhanced digital marketing – AI offers opportunities to enhance campaign creation, planning, targeting and evaluation, and to process big datasets faster and more efficiently. Opportunities exist for more innovative and relevant content creation and sharing using AI tools and technologies. (Emmanuel Mogaji)

Sales performance – Opportunities exist for improving sales performance using AI driven dashboards, predictive and forecasting capability and the use of big data to retain and develop new customer leads. Additionally, the use of AI algorithms can contribute to productivity and provide sales process enhancement through the elimination of non-productive activities and removal of mundane jobs. (Kenneth Le Meunier-FitzHugh & Leslie Caroline Le Meunier-FitzHugh)

Emerging markets – The presence of complementary assets is likely to influence the transition to AI in the developing world. Opportunities exist for the lessons learnt from India and Kenya to benefit similar low income countries in future. For instance, Pakistan, Vietnam and others are imitating the success story of Indian software services exports. (P. Vigneswara Ilavarasan)

People centred AI – AI can potentially be used to enhance 'softer' goals rather than the drive to economic productivity or efficiency. The genuine needs of people can be identified that can solve real-world problems. As our interactions with machines become more and more human-like, the opportunity lies in the design of new personalities and the creation of new types of relationship. (Jak Spencer)

Taste, fear and cultural proximity – Opportunities exist in the focus on market taste, fear and cultural proximity to improve organisational use of AI. While organisations' attention is currently focused on the pros of efficiency gains, they might be overlooking the market reaction to the integration of AI in their production process. Learning about tastes informs the market about AI-generated products and services. Learning about fear within AI-related social opinions and policy-making tendencies can help us make evidence-based AI-related decisions. Learning about the importance of cultural proximity in the context of AI-human cultural distance can help to quantify the cultural gravity effect that bounds our consumption of AI goods and products. (Annie Tubadji)
