AI in Finance: Benefits and Challenges
EXECUTIVE SUMMARY.........................................................................................................6
1 INTRODUCTION.....................................................................................................................8
2 KEY AI TECHNOLOGY IN FINANCIAL SERVICES.....................................................9
2.1 MACHINE LEARNING.........................................................................................................10
2.1.1 Supervised machine learning..................................................................................10
2.1.2 Unsupervised machine learning..............................................................................10
2.1.3 Reinforcement learning...........................................................................................11
2.2 EXPERT SYSTEM................................................................................................................12
2.3 NATURAL LANGUAGE PROCESSING...................................................................................12
2.4 ROBOTICS PROCESS AUTOMATION...................................................................................13
3 BENEFITS OF AI USE IN THE FINANCE SECTOR.....................................................13
3.1 IMPROVING DECISION-MAKING PROCESS..........................................................................14
3.2 AUTOMATING KEY BUSINESS PROCESSES IN CUSTOMER SERVICE AND INSURANCE.......15
3.3 ALGORITHMIC TRADING IMPROVEMENT...........................................................................15
3.4 IMPROVING FINANCIAL FORECASTING..............................................................................16
3.5 IMPROVING COMPLIANCE & FRAUD DETECTION..............................................................16
3.6 REDUCING ILLEGAL INSIDER TRADING.........................................................................17
3.7 REDUCING OPERATIONAL COSTS......................................................................................17
3.8 IMPROVING FINANCIAL INCLUSION...................................................................................17
3.9 STRENGTHENING CYBERSECURITY RESILIENCE................................................................18
3.10 TAKEAWAYS FROM THE INSURANCE SECTOR...............................................................18
4 THREATS & POTENTIAL PITFALLS..............................................................................19
4.1 EXPLAINABILITY AND TRANSPARENCY OF AI-BASED MODELS........................................19
4.2 FAIRNESS OF AI-BASED MODELS......................................................................................19
4.3 LACK OF ACCOUNTABILITY FOR AI OUTPUT....................................................................19
4.4 DE-SKILLING OF EMPLOYEES IN THE FINANCIAL SECTOR..................................................20
4.5 JOB DISPLACEMENT...........................................................................................................20
4.6 DATA PRIVACY CHALLENGES...........................................................................................21
4.7 SYSTEMIC RISK..................................................................................................................21
4.8 HIGH COST OF ERROR.......................................................................................................21
5 CHALLENGES.......................................................................................................................22
5.1 AVAILABILITY AND QUALITY OF TRAINING DATA...........................................................22
5.2 USE OF SYNTHETIC DATA IN AI-MODELS.........................................................................22
5.3 SELECTING THE OPTIMAL ML MODEL.............................................................................23
5.4 LEGACY INFRASTRUCTURE................................................................................................23
5.5 LACK OF APPROPRIATE SKILLS.........................................................................................24
5.6 REQUIREMENT OF BETTER AGILITY AND FASTER ADAPTABILITY...................................24
5.7 AI MODEL DEVELOPMENT CHALLENGES.........................................................................24
6 REGULATION OF AI AND REGULATING THROUGH AI........................................25
6.1 REGULATION OF AI..........................................................................................................25
6.1.1 Risk-based approach to regulating AI....................................................................26
6.1.2 Existing regulation on the use of AI in the finance sector......................................26
6.1.3 The need for Human-In-The-Loop..........................................................................27
6.2 REGULATING THROUGH AI...............................................................................................27
6.2.1 The regulatory context.............................................................................................27
6.2.2 The opportunity........................................................................................................28
6.2.3 The adoption of AI by regulators............................................................................28
6.2.4 The horizon..............................................................................................................29
6.2.5 The challenges.........................................................................................................29
6.3 REGULATORY TESTING OF AI...........................................................................................30
7 RECOMMENDATIONS........................................................................................................33
7.1 ACADEMIA.........................................................................................................................33
7.2 INDUSTRY..........................................................................................................................33
7.3 REGULATORS.....................................................................................................................34
REFERENCES...........................................................................................................................35
Abbreviations:
AI Artificial Intelligence
API Application Programming Interface
DARPA The Defense Advanced Research Projects Agency
ES Expert System
EU European Union
FCA Financial Conduct Authority
FINRA Financial Industry Regulatory Authority
GAN Generative Adversarial Networks
GDPR General Data Protection Regulation
IDE Integrated Development Environment
LDA Latent Dirichlet Allocation
LLM Large Language Models
LSE London Stock Exchange
MDP Markov Decision Process
MiFID Markets in Financial Instruments Directive
ML Machine Learning
NER Named Entity Recognition
NLP Natural Language Processing
NMF Non-negative Matrix Factorisation
NMT Neural Machine Translation
NN Neural Network
PCA Principal Component Analysis
RegTech Regulatory Technology
RL Reinforcement Learning
RPA Robotic Process Automation
SME Small and Medium-sized Enterprises
SMT Statistical Machine Translation
SVM Support Vector Machines
VAE Variational Autoencoders
XAI Explainable AI
Executive Summary
This report examines Artificial Intelligence (AI) in the financial sector, outlining its potential
to revolutionise the industry and identifying its challenges. It underscores the criticality of a
well-rounded understanding of AI, its capabilities, and its implications to effectively leverage
its potential while mitigating associated risks.
In its various forms, from simple rule-based systems to advanced deep learning models, AI
represents a paradigm shift in technology's role in finance. Machine Learning (ML), a subset
of AI, introduces a new way of processing and interpreting data, learning from it, and
improving over time. This self-learning capability of ML models differentiates them from
traditional rule-based systems and forms the core of AI's transformative potential.
The potential of AI extends from augmenting existing operations to paving the way
for novel applications in the finance sector. The application of AI in the financial sector is
transforming the industry. Its use spans areas from customer service enhancements, fraud
detection, and risk management to credit assessments and high-frequency trading. The
efficiency, speed, and automation provided by AI are increasingly being leveraged to yield
significant competitive advantage and to open new avenues for financial services.
However, along with these benefits, AI also presents several challenges. These include issues
related to transparency, interpretability, fairness, accountability, and trustworthiness. The use
of AI in the financial sector further raises critical questions about data privacy and security.
Concerns about the 'black box' nature of some AI models, which operate without clear
interpretability, are particularly pressing.
A pertinent issue identified in this report is the systemic risk that AI can introduce to the
financial sector. Being prone to errors, AI can exacerbate existing systemic risks, potentially
leading to financial crises. Furthermore, AI-based high-frequency trading systems can react
to market trends rapidly, potentially leading to market crashes.
Regulation is crucial to harnessing the benefits of AI while mitigating its potential risks.
Despite the global recognition of this need, there remains a lack of clear guidelines or
legislation for AI use in finance. This report discusses key principles that could guide the
formation of effective AI regulation in the financial sector, including the need for a risk-
based approach, the inclusion of ethical considerations, and the importance of maintaining a
balance between innovation and consumer protection.
The report provides recommendations for academia, the finance industry, and regulators.
For academia, the report underscores the need to develop models and frameworks for
Responsible AI and the integration of AI with blockchain and Decentralised Finance (DeFi).
It calls for further research into how AI outcomes should be communicated to foster trust and
urges academia to lead the development of Explainable AI (XAI) and interpretable AI.
The finance industry players are advised to be cognizant of data privacy issues when
deploying AI and to implement a robust 'human-in-the-loop' system for decision-making.
Emphasis is placed on maintaining an effective governance framework and ensuring
technical skill development among employees. Understanding the systemic risks that AI can
introduce is also emphasised.
The regulatory authorities are urged to shift from a reactive to a proactive stance on AI and
its implications. They should focus on addressing the risks and ethical concerns associated
with AI use and promote fair competition between AI-driven FinTech and traditional
financial institutions. The report advocates for regulatory experimentation to better
understand AI's opportunities and challenges. Lastly, fostering collaboration between
regulators and AI developers, and ensuring international coordination of regulations are deemed
pivotal.
These recommendations pave the way for the effective integration of AI in the financial
sector, ensuring its benefits are optimally harnessed while mitigating the associated risks.
1 Introduction
This report aims to study the impact of artificial intelligence (AI) on the finance sector, focusing
on its practical applications, challenges, and potential benefits for driving innovation and
competition. As a high-level concept, AI is a broad field of computer science that focuses on
creating models capable of performing tasks that typically require human-like intelligence,
such as understanding natural language, recognising images, making decisions, and learning
from data. These tasks encompass complex problem-solving abilities and human-like decision-
making, which have been a subject of interest for researchers for over seven decades (Agrawal
et al., 2019; Furman and Seamans, 2019; Brynjolfsson et al., 2021).
In recent years, there has been a significant increase in practical applications across
various industries, such as finance, healthcare, and manufacturing, thanks to advancements in
computing power, data storage, and low-latency, high-bandwidth communication protocols
(Biallas and O'Neill, 2020). One reason for AI's widespread adoption in sectors like financial
services is its versatility (Milana and Ashta, 2021). A Bank of England and FCA survey in
2022 found that 72% of surveyed firms reported using or developing machine learning
applications (Blake et al., 2022).
The increased use of AI in finance can be partially attributed to intense competition within the
sector (Kruse et al., 2019). The term 'Fintech' has been coined to describe companies that use
digital technologies in their services. Compared to traditional financial institutions, fintech
companies leverage technology to offer innovative and user-friendly financial services,
including mobile payments, online banking, peer-to-peer lending, and automated investment
platforms. Since these are often more convenient, efficient, and affordable financial services
for consumers, they may disrupt traditional financial services through heightened competition,
innovation, and a focus on customer satisfaction. Although traditional financial institutions
were initially reluctant to adapt to these changes, many are now investing in digital technology
and collaborating with emerging fintech firms to stay competitive. For example, Deutsche
Bank sought to invest in supply chain financing and establish a partnership to incorporate
supply chain solutions and technologies into their offerings. They collaborated with Traxpay,
a German fintech company that provides discounting and reverse factoring solutions to its
corporate clients (Hamann, 2021). This partnership has allowed Deutsche Bank to become a
prominent player in the global supply chain financing industry. Using AI, financial institutions
can gain a competitive advantage by introducing innovative services and improving operational
efficiency (Ryll et al., 2020).
The finance industry generates a vast and constantly growing amount of data, including daily
transactions, market trends, and customer information. Such rich data can be harnessed to train
AI algorithms and create predictive models, making it an ideal domain for AI applications
(Boot et al., 2021). These applications can identify patterns and predict future trends in the
market, enabling financial institutions to make more informed decisions about investments and
other financial operations. Moreover, AI can also analyse customer behaviour and preferences,
offering tailored recommendations and personalised services (Zheng et al., 2019). By
leveraging AI applications, financial institutions can optimise operations, reduce costs, and
provide better customer service. With the increasing volume of data generated by the financial
industry, the integration of AI technology is expected to become even more prominent, leading
to more sophisticated applications and further transforming the financial landscape.
Despite the potential benefits of AI in the finance sector, industry experts and academic
research suggest that financial institutions have not been able to fully leverage its potential
(Cao, 2022; Fabri et al., 2022). This is partly because of the numerous challenges and pitfalls
of developing and using AI models. One of the primary concerns for customers is the issue of
data bias and representativeness, as improper use of AI can lead to discriminatory decisions
(Ashta and Herrmann, 2021). Furthermore, over-reliance on similar AI models or third-party
AI providers can worsen the situation, potentially leading to the exclusion of certain groups of
customers from the entire market rather than just a single financial institution (Daníelsson et
al., 2022). A comprehensive approach is needed to address these challenges, including
transparent audits of AI models and data sources, regular audits to assess AI algorithms'
accuracy and fairness, and engagement with customers to ensure that their concerns are being
addressed.
As AI becomes increasingly popular in the financial sector, firms face challenges related to the
explainability and interpretability of AI models (Fabri et al., 2022). Such challenges can lead
to reputational damage and a reluctance to adopt AI applications. Firms' stakeholders, including
customers, investors, and regulators, demand transparency and accountability in decision-
making processes. Lack of interpretability can make it challenging for firms to identify and
address errors or biases in their decision-making mechanisms using AI, resulting in legal and
financial consequences. Hence, explainability and interpretability are critical factors for AI's
responsible and ethical use in the financial sector (Fabri et al., 2022). Such challenges emerging
due to the use of AI in finance highlight the importance of effective model governance and
regulations to ensure ethical and responsible use of the technology (Ryll et al., 2020). Such
measures not only increase consumer trust in AI but also help financial firms avoid negative
consequences like legal liability, reputational damage, and the loss of customers. However,
stringent regulations may impose a significant burden on firms seeking to implement AI
systems, including additional costs related to data privacy, security measures, or hiring more
staff to monitor and maintain the technology. These costs may deter some firms, especially
smaller ones with limited resources, from adopting AI.
Regulators are also leveraging AI to enhance the efficiency and effectiveness of regulatory
processes. Organisations such as the London Stock Exchange (LSE) and the Financial Industry
Regulatory Authority (FINRA) are embracing AI as a means of improving their regulatory
capabilities (Prove, 2021). For example, the LSE has partnered with IBM Watson and
SparkCognition to develop AI-enhanced surveillance capabilities, while FINRA uses AI to
identify and prevent stock market malpractices (Prove, 2021). Regulators must balance the
benefits and risks of AI and ensure appropriate safeguards are in place to mitigate negative
outcomes. The recent surge in publications has shed light on various opportunities, challenges,
and implications of AI in the financial services industry (Bahrammirzaee, 2010; Cao, 2020;
Hilpisch, 2020; Königstorfer and Thalmann, 2020).
This report seeks to complement and update previous surveys and literature reviews by
achieving the following objectives: 1) summarising the key AI technology in finance services
based on research from finance and information systems studies; 2) examining the benefits of
AI use and adoption in the finance sector; 3) highlighting potential negative consequences and
threats associated with AI use in the finance sector; 4) addressing and evaluating the challenges;
5) discussing the role of regulators in addressing these unintended outcomes while exploring the use of AI to
enhance regulatory work; and 6) providing recommendations to academia, industry, and
regulators.
Gomes et al., 2021; Kedia et al., 2018; Mittal and Tyagi, 2019; Umuhoza et al., 2020).
Compared with supervised learning, unsupervised learning can be less interpretable as patterns
and relationships discovered may not be straightforward and intuitive, and it is challenging to
evaluate and validate the results of the analysis as there are no predefined labels or classes to
compare against (Lee and Shin, 2020). Moreover, unsupervised learning can be more
computationally intensive than supervised learning, given the larger volumes of data and more complex
algorithms required, and it can be more prone to overfitting as it does not have the same level
of guidance or constraint as supervised learning. Some of the commonly used models of
unsupervised learning include:
2.2 Expert System
An expert system (ES) is a type of AI system that imitates the expert decision-making abilities
of a specific domain or field. ES utilises information in a knowledge base, a set of rules or
decision trees, and an inference engine to solve problems that are sufficiently difficult to require
human expertise for resolution (Harmon and King, 1985). ES consists of three main
components (Metaxiotis and Psarras, 2003; Sotnik et al., 2022):
Knowledge base: It contains domain-specific knowledge and rules that the expert
system utilises to solve specific problems. The knowledge base is typically created by
domain experts and is organised to enable efficient access. The most used technique is
the if-then rule.
Inference engine: This expert system component uses knowledge in the knowledge base
to draw conclusions and make recommendations. It utilises a set of rules or decision
trees to guide its reasoning.
User interface: This enables users to interact with the system, ask questions, and receive
recommendations or advice. It mainly consists of screen displays, a consultation dialog
and an explanation component.
Several factors differentiate expert systems from other mathematical models (Jackson, 1986).
For example, (a) they can handle and process qualitative information; (b) they are not restricted by
inflexible mathematical or analogue methodologies and are capable of managing
factual or heuristic knowledge; (c) the knowledge base of an ES can be continually expanded
with accumulating experience as necessary; (d) they can deal with uncertain, unreliable, or
even missing data; and (e) they are capable of reflecting decision patterns of users.
ESs are used in various applications, such as financial prediction, credit risk analysis, and
portfolio management (Bisht and Kumar, 2022; Mahmoud et al., 2008; Nikolopoulos et al.,
2008; Shiue et al., 2008; Yunusoglu and Selim, 2013). They can be particularly useful in
situations where the knowledge required is complex and difficult to acquire or where there is a
shortage of human experts in a particular field. However, developing an expert system can be
time-consuming and expensive, and its accuracy depends on the quality of the
knowledge base and the rules used by the inference engine.
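To make the if-then structure concrete, the sketch below combines a small knowledge base with a forward-chaining inference engine for a credit-risk consultation. The rules, thresholds, and applicant data are illustrative assumptions, not drawn from the report or from any real lender's policy.

```python
# Minimal sketch of a rule-based expert system for credit-risk screening.
# All rules and thresholds are illustrative assumptions.

def knowledge_base():
    """If-then rules: each rule maps a condition over known facts to a new fact."""
    return [
        (lambda f: f.get("credit_score", 0) < 580, ("risk_band", "high")),
        (lambda f: 580 <= f.get("credit_score", 0) < 700, ("risk_band", "medium")),
        (lambda f: f.get("credit_score", 0) >= 700, ("risk_band", "low")),
        (lambda f: f.get("risk_band") == "high" or f.get("debt_to_income", 1.0) > 0.5,
         ("recommendation", "refer to human underwriter")),
        (lambda f: f.get("risk_band") == "low" and f.get("debt_to_income", 1.0) <= 0.35,
         ("recommendation", "approve")),
    ]

def inference_engine(facts, rules):
    """Forward chaining: keep firing rules until no new facts can be derived."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for condition, (key, value) in rules:
            if key not in facts and condition(facts):
                facts[key] = value
                changed = True
    return facts

# User-interface stand-in: a consultation with one applicant's facts.
applicant = {"credit_score": 645, "debt_to_income": 0.55}
print(inference_engine(applicant, knowledge_base()))
# -> risk_band 'medium', recommendation 'refer to human underwriter'
```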
Herrmann, 2021). This reflects financial institutions' different capabilities and resources and
the scope of their services.
3.2 Automating Key Business Processes in Customer Service and Insurance
Financial institutions have benefited from automating key business processes using RPA
algorithms (Wittmann and Lutfiju, 2021). Such RPA algorithms are mainly used as robo-advisors to
support customer services in retail banking and, more recently, in wealth management (Kruse
et al., 2019). Such robo-advisors can offer automated financial planning services like tax
planning guidance, opening a bank account, recommending insurance policies, giving
investment advice, and many other essential financial services.
Banks can make easy wins in key areas such as untapped client segments, lower acquisition
costs, stronger usage of existing products and services, and improved access and scale by
adopting an AI-first approach to customer interaction (McKinsey, 2020). Customers are asking
for financial services to be delivered with a wider range of goods and services anytime,
anywhere (Zeinalizadeh et al., 2015). Financial organisations can no longer ignore the
extraordinary advantages of integrating and utilising Robotic Process Automation (RPA)
solutions in their environment. Data collection enhances the user experience and offers
numerous benefits to customers by creating the impression that AI interactions are on par with
those of humans. By providing personal data, consumers can obtain personalised services,
information, and entertainment, frequently for little or no cost.
Access to personalised services also suggests that users will benefit from the choices made by
digital assistants, which successfully match preferences with accessible possibilities without
subjecting users to the cognitive and affective exhaustion that can come with decision-making.
To maintain their competitive advantage and boost profitability, banks are placing increasing
strategic importance on RPA. The main advantage of using RPA services in retail banking
is that it allows banks to operate around the clock, deliver cutting-edge services, and improve
client experiences while increasing efficiency and accuracy (Villar and Khan, 2021). The
sharing economy has developed to give consumers more power. Real-time analytics and
messaging require the end-to-end integration of internal resources to fully realise this
potential. Banks must modernise their IT architecture and analytical skills to acquire, process,
and accurately analyse client data.
Similar automation is also often used in insurance, for example, in the pre-validation of pre-
approved claims. Dhieb et al. (2019) propose an automated deep learning-based architecture
for vehicle damage detection and localisation. Cranfield and White (2016) explain how
an insurance claims outsourcing and loss adjusting firm managed to implement RPA (robotic,
cognitive robotic, and AI), leading to a team of just four people processing around 3,000 claims
documents a day. Thus, robo-advisors have been helping insurers collect information about
claims and process the gathered information quickly.
networks and fuzzy logic to advance algorithmic trading. There are many examples of these
AI-based models, such as an AI-model based on a reinforcement learning algorithm that can
improve stock trading (Luo et al., 2019), an NLP-based model
to predict stock trading returns (Martinez et al., 2019), and fuzzy logic models for predicting
trends in financial asset prices (Cohen, 2022).
Recently, researchers have also become interested in studying the impact of social media data
on stock performance. The proliferation of advanced AI and ML techniques has facilitated this
research. For example, Valencia et al. (2019) developed an ML model to analyse how Twitter data can
predict the price movements of several cryptocurrencies, such as Bitcoin, Ethereum, Ripple,
and Litecoin. Similarly, Wolk (2019) has shown that advanced social media sentiment analysis
can be used to predict short-term price fluctuations in cryptocurrencies.
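To illustrate the kind of social-media-driven signal discussed above in its simplest possible form, the sketch below scores posts against a tiny sentiment lexicon and maps the aggregate score to a trading signal. The word lists, posts, and threshold are assumptions for illustration; the studies cited use far more sophisticated NLP and market-data pipelines.

```python
# Minimal sketch of a sentiment-driven trading signal. The lexicon, posts, and
# long/flat/short rule are illustrative assumptions only.

POSITIVE = {"surge", "bullish", "rally", "beat", "growth"}
NEGATIVE = {"crash", "bearish", "fraud", "miss", "selloff"}

def sentiment_score(post: str) -> int:
    """Crude lexicon-based sentiment: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_signal(posts: list[str], threshold: int = 1) -> str:
    """Aggregate sentiment over a day's posts and map it to a trading signal."""
    total = sum(sentiment_score(p) for p in posts)
    if total >= threshold:
        return "long"
    if total <= -threshold:
        return "short"
    return "flat"

posts = [
    "Bitcoin rally continues, bullish momentum everywhere",
    "Analysts warn of a possible selloff after the surge",
]
print(daily_signal(posts))  # "long": total sentiment = 2 + 0 = 2
```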
ML techniques can find hidden fraud patterns by considering non-traditional financial data
(Milana and Ashta, 2021). Reinforcement learning will most likely be used to model unusual financial
behaviour (Canhoto, 2020; Milana and Ashta, 2021). For example, AI-based models can
recognise fraud patterns by studying annual financial statements to identify the risk of financial
irregularities within an organisation (Wyrobeck, 2020). ML techniques can also be used
successfully for identifying money laundering activities (Ahmed et al., 2022).
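As a minimal illustration of unsupervised fraud screening of the kind described above, the sketch below flags anomalous transactions with scikit-learn's IsolationForest. The simulated transactions, the chosen features, and the contamination rate are assumptions for demonstration only.

```python
# Sketch of unsupervised fraud screening with an isolation forest, one of several
# ML techniques that could surface unusual financial behaviour.
# Synthetic transactions and parameters are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per transaction: [amount, hour of day, merchant risk score].
normal = np.column_stack([
    rng.normal(50, 15, 1000),      # typical card spend
    rng.normal(14, 3, 1000),       # daytime activity
    rng.uniform(0.0, 0.3, 1000),   # low-risk merchants
])
suspicious = np.array([
    [4800.0, 3.0, 0.9],            # very large amount, 3 a.m., risky merchant
    [2500.0, 2.0, 0.8],
])
transactions = np.vstack([normal, suspicious])

# Fit on the full stream; ~0.5% of points are expected to be anomalous.
model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(transactions)   # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print("Transactions flagged for review:", flagged)
```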
based transactions and into formal financial services, where they can access a range of services
like payments, transfers, credit, insurance, stocks, and savings (WorldBank, 2020). The
problem of information asymmetry between financial institutions and individuals can be solved
by providing digital financial inclusion through AI-enabled access to various social networks and online
shopping platforms, which generate a wealth of personal data (Yang and Youtang, 2020).
People might be able to access credit, save money, make deposits, withdraw, transfer, and pay
for goods and services using an AI-enabled mobile device. This enables those with
modest incomes to obtain services that are not available to them through the traditional banking
system (Van and Antoine, 2019).
4 Threats & Potential Pitfalls
While AI can offer significant benefits for the financial industry, researchers and practitioners
point out that the use of AI comes with many threats/potential pitfalls. Therefore, organisations,
users and regulators must remain cognizant and vigilant of the potential drawbacks associated
with using AI to ensure that this technology is utilised fairly and efficiently.
make critical decisions with important implications, such as assigning falsely bad credit scores,
which can deny access to a loan. In cases where such AI-based critical decisions are made
based on inaccurate training or biased and unrepresentative data, it is challenging to determine
who is accountable for these decisions (Ashta and Herrmann, 2021; Fabri et al., 2022).
Machine learning techniques and associated artificial intelligence technologies use historical
training data to determine how to respond in various circumstances. They frequently update
their databases and training materials in response to new information. Two significant
issues arise from the use of these technologies and must be
considered. First, decisions are made automatically without human involvement, and mistakes
cannot be tracked. Second, the justification for a decision's formulation might not always be
clear to auditors (DRCF, 2022).
AI is used in a variety of processes, including damage assessment, IT, human resources, and
legislative reform. AI systems can quickly pick up on petitions, policies, and changes made
due to those policies. They are also quick to decide. This strategy raises concerns about
security, social, economic, and political dangers and decision-making accountability. This
further erodes trust in AI-based systems and reinforces the need for AI transparency and
explainability.
5 Challenges
While the above section on pitfalls outlines some of the important issues that may stem from
the continuous and widespread use of AI in the finance sector, various challenges associated
with using AI would remain. If not addressed appropriately, these challenges may slow the
adoption of AI-based systems in the finance sector.
models. Synthetic data can also be limited in its representativeness of real financial data, and
outliers must be considered during the generation process to avoid compromising privacy
(FCA, 2022).
Another significant challenge is ensuring security and privacy when undertaking synthetic data
generation. Synthetic data generation techniques require real data as input, which poses a risk
to consumers' privacy rights. Developers must comply with data protection laws to protect
consumers' privacy and avoid infringing on their rights. Adequate measures must be in place
to secure the synthetic data and prevent any unauthorised access or misuse of the data. Financial
institutions must address these challenges to effectively leverage synthetic data to develop
accurate and reliable AI models while ensuring the privacy and security of their customers'
data (FCA, 2022).
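The sketch below illustrates one very simple way synthetic records can be produced: fitting a distribution to real data and sampling new records from it. This is an assumption-laden toy, not the FCA's approach; real programmes would use richer generators (such as the GANs or VAEs listed in the abbreviations) together with formal privacy and outlier checks, and the "real" data here is itself simulated.

```python
# Minimal sketch of synthetic data generation: fit a simple distribution to real
# records and sample new ones. The "real" records below are simulated purely
# for illustration, and a multivariate Gaussian is a deliberately crude generator.

import numpy as np

rng = np.random.default_rng(7)

# Stand-in for real customer records: [annual income, loan amount, age].
real = np.column_stack([
    rng.lognormal(mean=10.3, sigma=0.4, size=500),
    rng.lognormal(mean=9.5, sigma=0.6, size=500),
    rng.normal(40, 10, size=500),
])

# Fit a multivariate Gaussian to the real data and sample synthetic records.
# (Preserves means and correlations but not tail shape, and offers no formal
# privacy guarantee on its own.)
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=500)

# Basic fidelity check: column means should roughly match. Privacy checks,
# such as nearest-neighbour distance to real records, would also be needed.
print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```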
5.5 Lack of Appropriate Skills
Using AI also challenges financial organisations as most employees do not possess the
technological expertise required to effectively operate AI systems (Kruse et al., 2019). Many
AI systems require specialised programming, data analytics, and machine learning knowledge.
Without these skills, employees may encounter difficulties comprehending how to properly use
and interpret the outcomes produced by AI systems.
Furthermore, the rapid pace of advancements in AI technologies can make it hard for employees
to stay up to date with the latest trends and optimal techniques. Hence, organisations might
need to invest in ongoing education and training programs to guarantee that their employees
have the necessary skills and knowledge to use AI systems competently.
Moreover, adopting AI can change job responsibilities and roles (Culbertson, 2018; LinkedIn,
2017). Certain tasks might become automated, making them redundant, while others may
require fresh skills or an alternative approach to problem-solving. Thus, it is crucial for
organisations to anticipate and plan for these changes and to provide their employees with the
essential support and training to adapt to their new roles and responsibilities.
6 Regulation of AI and Regulating through AI
Given the rapid proliferation of AI-based systems in the finance sector and the threats/pitfalls
they may create for individuals, organisations and society, regulators across various
jurisdictions have been investigating how and to what extent they should regulate the use of AI
in the finance sector. To better understand this emerging technology and its benefits, as well as
its associated threats, regulators have also become increasingly interested in harnessing its
innovation potential for regulatory work. This has led to the emergence of algorithmic
decision-making within regulation itself, raising two interrelated
issues: regulation of AI and regulating through AI (Ulbricht and Yeung, 2021; Yeung, 2017).
To address this, regulators use regulatory testing of AI, which we explore in detail below.
6.1 Regulation of AI
A fundamental aspect of good financial regulation is enhancing public trust by ensuring
markets function well. Consumers should feel safe investing in financial offerings that suit
them without the risk of being defrauded or even misinformed when making the investment.
To continue maintaining consumer trust in the market in the context of a changing
technology innovation landscape, regulators have a responsibility to be cognizant of how markets are
evolving, to keep sight of risks that could emerge for consumers adopting emerging
technologies such as AI, and to establish the right safeguards along the way.
Digital technologies and data have disrupted entire industries and, in many cases, have brought
about products and business models that do not fit well with existing regulatory frameworks,
particularly in the finance, transport, energy, and health industries. Regulation in these industries
is paramount to safeguard safety and quality standards to ensure the ongoing provision of
critical infrastructures (OECD, 2019). It is now ever more difficult to know what, when, and
how to structure regulatory interventions in a rapidly evolving technological landscape with
immense disruptive potential (Fenwick et al., 2017).
Four main regulatory considerations arise from the growing adoption of new technologies:
Consumer protection: What are the implications for consumers, especially regarding
how their data might be used in the provision of offerings leveraging new technologies,
as well as the risks around investing in the offering?
Competition concerns: What are the implications for the diverse players in the market,
especially smaller firms looking to compete with well-established tech firms that start
providing financial services?
Market integrity: What are the implications for financial stability, especially if many
consumers start investing in risky, unregulated offerings without being subject to
protections?
Operational resilience: What are the implications for the financial market
infrastructure, especially in the event of operational disruption or large-scale cyberattacks, given
the rapidly growing dependence on technology? The Covid-19 pandemic showed the
importance of ensuring operational resilience to protect consumers and market
integrity.
With these considerations in mind, deciding on the scope of any AI regulation is not a simple
task, as evident by the cautionary approaches undertaken by regulators around the globe. One
of the challenges is related to the established principle of technology neutrality, which
2007, p. 264). Already, however, certain regulations have focused specifically on AI. For
example, while claiming to be technology-neutral, the European Union (EU) AI Act focuses
on one specific class of technology AI. Further, the act has been criticised for evading the
technology neutrality principle by providing a too broad definition of AI as part of its scope,
which currently encompasses AI techniques that are not considered to pose significant risks to
consumers (Grady, 2023).
Other existing regulatory initiatives have also shown that it is not easy to regulate AI use in the
finance sector, given that AI relates to a wide spectrum of analytical techniques applied across
various finance areas. The rapid development of AI techniques also makes determining the
right scope and timing of legislation problematic as regulators want to avoid overregulating,
which may stifle innovative AI use in the finance sector and thus deprive us of AI-related
benefits (see above).
provides robust rules concerning algorithmic trading. Further, various anti-discrimination laws
forbid the use of statistical features that pose a serious threat of bias against protected
characteristics. Such discrimination is prohibited under the Equality Act 2010, which forbids
insurers from utilising algorithms that may result in discrimination based on appearance and
physical attributes. This is an undisputed and obvious point. Indirect discrimination may occur
even though the algorithms used in the risk individualisation process are not designed to
analyse physical attributes (Mann and Matzner, 2019). However, the actual outcomes of the
individualisation achieved by the algorithms would be particularly harmful to people who have
a protected attribute. This type of discrimination, sometimes known as unintentional proxy
discrimination, is widely believed to be inevitable when algorithms are used to look for
relationships between input data and goal variables, regardless of the nature of these
relationships (Prince and Schwarcz, 2019). For example, the programme would not
purposefully discriminate against people based on their gender. Some proxies, such as the
colour or brand of the car, on the other hand, may unintentionally reproduce biases or
unintended outcomes that a person would not deliberately incorporate into the system.
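A simple way to probe for unintentional proxy discrimination is to test whether an apparently neutral rating factor predicts a protected attribute better than chance. The sketch below does this on simulated data; the feature, the group rates, and the naive classifier are illustrative assumptions only.

```python
# Sketch of a simple proxy-discrimination check: does an apparently neutral
# feature (a hypothetical "car brand" code) predict a protected attribute better
# than chance? All data below is simulated for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (e.g., gender, encoded 0/1) - never used by the pricing model.
protected = rng.integers(0, 2, size=n)

# A "neutral" rating factor that, in this simulation, is correlated with the
# protected attribute (group 1 has brand code 1 seventy percent of the time).
car_brand = np.where(protected == 1,
                     rng.random(n) < 0.7,
                     rng.random(n) < 0.3).astype(int)

# How often does knowing the brand let us guess the protected attribute?
guess = car_brand  # naive classifier: predict protected = brand code
accuracy = (guess == protected).mean()
rate_by_group = [car_brand[protected == g].mean() for g in (0, 1)]

print(f"brand rate by protected group: {rate_by_group}")   # roughly [0.30, 0.70]
print(f"proxy accuracy vs. 0.5 baseline: {accuracy:.2f}")  # ~0.70 -> brand acts as a proxy
```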
Regulators, however, have continued to evaluate whether the growing use of AI in the finance
sector can negatively impact consumer protection, competition, financial stability, and market
integrity (Bank of England and FCA, 2022).
regimes on targeting outcomes, there is a greater need for more and better information. These
same information resources low-
probability but high-impact events that cause the most harm to markets and consumers (Black,
2014).
6.3 Regulatory Testing of AI
In order to better understand the benefits, threats and challenges of regulating AI and regulating
through AI, a common approach explored is regulatory testing of AI. To achieve this, regulators
worldwide have begun implementing testing initiatives for
new technologies such as AI. This focus on regulatory testing and learning, as opposed to a purely reactive stance, reflects the disruptive
potential of new technologies, the need for a more responsive regulatory design, and the
growing interest in innovative products and services. The examples provided in this section
will largely be grounded in the context of financial regulation. However, many of the themes
and principles can also be extrapolated to regulation in other sectors.
Testing environments such as test beds, living labs, and sandboxes have provided an avenue
for evidence-based regulatory testing to support innovation and regulatory governance. Each
testing environment has its own distinctive features that can support regulatory decision-
making and learning by bringing in various stakeholders (Kert et al, 2022).
Digital Sandboxes are environments that provide a controlled space for experimentation,
development, analysis and evaluation. There have been a plethora of Digital Sandboxes that
regulators around the world have developed. These sandboxes have had various use cases and
seen strong industry engagement. The Monetary Authority of Singapore (MAS) recently
launched Project Guardian, a collaboration with the financial industry to explore the economic
potential of asset tokenisation (representing assets through a smart contract on a blockchain).
The FCA held a 3-week DataSprint, which convened 120 participants from across regulated
firms, start-ups, academia, professional services, data scientists and subject matter experts to
collaborate on developing high-quality synthetic financial datasets to be used by the members
of the digital sandbox pilot (FCA, 2020). Members of the digital sandbox gain access to a suite
of features such as readily accessible synthetic data assets; an Integrated Development
Environment (IDE) with access to solution development tools; an observation deck to view the
in-flight testing of solutions; a showcase page to examine solutions relating to different themes;
an ecosystem of relevant stakeholders in order to facilitate solution development from both
technological and conceptual angles; and an application programming interface (API) where
vendors can list their solutions and APIs to encourage greater interoperability and foster a
thriving ecosystem. To provide an example of the solutions that have emerged from the digital
sandbox, the firm Financial Network Analytics developed a solution that uses NNs to establish
the usual patterns of behaviour between organisations and individuals to highlight anomalies
that can be used to detect fraudulent payments (FCA, 2021).
The digital sandbox is for firms at the early proof of concept stage, whereas the regulatory sandbox helps
firms prepare to take their services to the market. The digital sandbox can be seen as a
mechanism to support the early testing of emerging technologies using the development
features and the datasets available and forms part of the experimentation wing of the regulator.
It also gives participants access to datasets for solution development and validation. TechSprints are
regulatory-led hackathons that facilitate collaboration among experts across and outside
financial services to identify and develop solutions to key problem statements. These solutions
often form proofs of concept that regulators and industry can explore and develop further.
A TechSprint embodies a test-and-learn approach to emerging technologies to
understand their potential in addressing challenges and unveiling opportunities. It is geared more
towards understanding the various possibilities in which emerging technologies can be
harnessed to meet desirable outcomes instead of one that is immediately ready to implement
and fit for purpose. Instead, it establishes the groundwork for new use cases to build upon by
understanding the art of the possible. Some notable examples of TechSprints have been on
anti-money laundering and financial crime, which shed light on the potential of Privacy
Enhancing Technologies to facilitate sharing information about money laundering and
financial crime concerns while remaining compliant with data security laws.
The two examples above, alongside other regulatory testing initiatives, have some common
themes and practices that tend to underpin them.
First, they tend to be problem-led rather than having a
preference for a particular technology or solution driving the process. As opposed to building
a technological solution and finding ways to apply it, regulatory approaches tend to identify
the problems first and start considering solutions that could potentially address those problems.
Second, there is often an explicit
focus on ensuring that consumer privacy and protection are kept at the core while exploring
the possible ways solutions can address a particular problem statement. More importantly, it
also acknowledges that emerging technologies may not offer the best solution out of a range of
options in some cases. Third, it helps explore which technologies could lend themselves to
operational use, an important consideration for the future.
This is especially relevant when understanding the implications of scaling up prototypes,
developing an operational tool, and maintaining it over time. Fourth, there is a strong
component of learning from other players' experiences beyond financial services, including
other regulators (BEIS, 2020). Finally, there is a forward-looking aspect which is still very
much grounded in existing tooling capabilities (BEIS, 2022), allowing regulators and other
stakeholders to explore the trajectory for technologies concerning specific use cases and,
consequently, which areas could benefit from further policy guidance.
At its core, regulatory testing aims to understand how an uncertain future can impact specific
outcomes within an industry. While new technologies bring more uncertainty about their
implications, it is also worth noting that it is a challenge that has always arisen in response to
any change, and it has been met with approaches that involve scenario analysis and hypothesis
testing. As such, the principles underlying innovative approaches to testing new technologies
remain fairly similar, even if the approaches taken might become more advanced as they iterate.
Increasingly, sandboxes for specific types of technologies are becoming more widely adopted.
The EU AI Act is a recently proposed regulatory framework for AI that aims to promote the
development and adoption of trustworthy and ethical AI systems while ensuring that these
systems are developed and used responsibly and transparently. The AI Act includes several
key provisions, including requirements for risk assessment, transparency, human oversight,
and data protection.
A key element of the AI Act is the proposal for EU member states to set up national AI
regulatory sandboxes to provide a platform for companies to test their AI systems in a
controlled environment without facing the full burden of regulatory compliance. These
sandboxes aim to encourage innovation while ensuring that AI systems are developed
responsibly and safely (European Parliament, 2022).
Similarly, the European Commission has recently launched the European Blockchain
Regulatory Sandbox for innovative use cases involving Distributed Ledger Technologies
(DLT) in order to establish a pan-European framework to facilitate the dialogue between
regulators and innovators for private and public sector use cases (European Commission,
2023).
These initiatives fall under the wider bucket of anticipatory regulation and involve engaging
with stakeholders, monitoring trends and developments in the market, developing new
regulatory frameworks and sandboxes to support emerging technologies, promoting
collaboration between industry participants and regulators, and actively shaping the regulatory
environment to promote innovation, competition, and consumer protection (OECD, 2020;
Nesta, 2020).
In moving towards anticipatory regulation, regulators are increasingly becoming "market
makers" rather than "market fixers" (Mazzucato, 2016). This concept was introduced by
economist Mariana Mazzucato, who argues that regulators should take a more proactive role
in shaping markets rather than simply responding to market failures or crises.
According to this concept, regulators should promote innovation and investment in key areas,
such as green technologies, healthcare, and education, by providing the necessary
infrastructure, funding, and regulatory frameworks to support these industries. This approach
involves a greater emphasis on collaboration between regulators, industry stakeholders, and
other actors in the market rather than relying solely on top-down regulation.
In understanding the strategic rationale for the regulator in expanding into the realm of tech
exploration, the considerations need to be rooted in the regulatory objectives. Ultimately, the
regulatory objectives drive and justify the undertaking of these initiatives. Most regulators have
a mandate to protect consumers and enhance market integrity, with the UK FCA having a third
objective to promote competition in financial services in the interest of consumers. In a rapidly
changing landscape, with technologies like blockchain, AI, and quantum computing becoming
increasingly disruptive while also posing many opportunities, there is a role for the regulator
in keeping pace with these developments not just as an observer but as an active player in
channelling the use of these technologies down the right and responsible avenues.
Anticipatory regulation is not just about becoming aware of risks and developments earlier but
also carries an element of actively shaping the market. Beyond more formalised procedures of standards
and legislation, the act of regulatory signalling in itself has a market-making component.
Through initiatives such as a digital sandbox programme, or a TechSprint initiative, the
regulator can signal that they would like to see more innovation in a specific area while actively
providing guidance and policy steers. In signalling their appetite to encourage innovation and
help provide the right environment for firms to optimise their development, regulators can
actively shape the currents of innovation while learning about new technologies. As such, while
regulatory innovation had started primarily serving a learning purpose, it can also become an
influencing force.
In conclusion, the idea of regulators as "market makers" is particularly relevant in emerging
technologies, such as AI, blockchain, and fintech. These technologies are rapidly transforming
the financial industry and creating new opportunities for innovation, but they also raise
important regulatory challenges, such as data privacy, cybersecurity, and consumer protection.
As Ramos and Mazzucato (2022) note, while AI applications can improve the world, with no
effective rules, they may create new inequalities or reproduce unfair social biases. Similarly,
market concentration may be another concern, with AI development being dominated by a
select few large technology firms. They argue that the development and deployment of AI should be
underpinned by sound regulation.
In doing so, they note that the key is to equip the policymakers to manage how AI systems are
deployed rather than always playing catch up.
As the technological landscape evolves, the role of the regulator and the parameters within
which it operates will also become increasingly blurry. There will be a need for guidance from
the regulator around best practices, standards, and ethical considerations concerning new
technologies. Consequently, regulatory experimentation will only increase in the future and
become more collaborative and data-led. Ultimately, if regulation is to become more
forward-looking, the regulator will have an increasingly active role in
helping create the rules and parameters of the game.
7 Recommendations
7.1 Academia
Academia has a strong role to play in supporting the regulation of AI and the research and
development of AI to support financial regulation. Key to this role will be the active
engagement with regulators and industry, and there are many good examples to build upon.
We recommend further initiatives to support collaboration, including cross regulatory-
academic secondments to understand the ways of working and share learnings, as has been the
case in the project that supported this report. Other recommendations for academia are:
Develop models, frameworks and recommendations for Responsible AI, which address
issues around fairness and accountability.
Propose how we can integrate AI with blockchain and DeFi, which can improve the
efficiency of both technologies and help utilise their potential better.
Behavioural and experimental finance researchers need to investigate how AI results
and descriptions must be presented so that customers develop trust and ultimately perceive
the product as attractive.
Development of explainable AI and interpretable AI: current Explainable
AI (XAI) methods require significant time to run and are expensive. The Defense Advanced
Research Projects Agency (DARPA) has invested 50 million USD and launched a 5-year
XAI research programme. Academia should lead further development of XAI methods
that can support effective human-in-the-loop oversight.
Focus on developing AI-based models combining different AI techniques while
factoring in human intelligence. Scholars have agreed that combining AI techniques
can create more accurate models, strengthening trust in AI systems.
Developing AI models which adequately address issues concerning explainability and
transparency (see Milana and Ashta, 2021, for example).
7.2 Industry
Despite the outlined benefits of AI for the finance sector, industry reports and academic studies
suggest that current adoption has largely focused on incremental gains such as cost reduction and
process optimisation (PwC, 2020). More opportunities lie ahead, which can be pursued more
productively if the challenges outlined above are overcome. In particular, we strongly
recommend stronger engagement with academia and regulatory bodies, especially regarding
emerging technologies, projects and applications and their uses. Knowledge sharing (with
appropriate commercial and regulatory safeguards) will advance the market and society. We
also make the following recommendations to financial organisations:
Be aware of data privacy challenges when developing and deploying AI models. Be
aware of the unintended consequences and potential pitfalls associated with using AI.
Bring human-in-the-loop (intervention): this is vital for several reasons: 1) Human
intervention can catch and correct Type I and Type II errors (that is, False Positives and
False Negatives, respectively). 2) Builds trust in machine learning models. 3) Ensures
accountability for decisions. 4) Ensures adequate evidence exists to deliver
consequential regulatory actions. 5) Process privileged information and decisions safely
and securely.
Ensure an effective governance framework within organisations and at the industry level
(e.g., Model Risk Management and data quality validation): include effective
assessment and reviews of ML models from development to deployment. Ensure
technical skills training of employees, i.e., train employees in how to use AI-based
systems and to be aware of AI ethics.
Understand better the threats AI can bring regarding systemic risk to the financial
systems.
Ensure accountability, verifiability, and the evaluation of algorithms, data, and design
processes.
7.3 Regulators
Regulators should move from a reactive to a more proactive approach to understanding
emerging technologies such as AI in terms of both opportunities and challenges. Such a
proactive approach can help regulators understand how best to regulate them. Regulatory
intervention can address the threats and challenges associated with using AI in the finance
sector.
Correct the most salient unintended consequences of the use of AI based on a risk-based
approach (regulate strictly only high risks).
Promote fair competition between FinTech using AI and traditional financial
institutions.
Strike a balance between AI overregulation and promoting AI development and use in
finance.
Understand better the opportunities and threats of AI through regulatory experimenting.
Assess the opportunities and challenges of regulating through AI.
Ensure customer protection: regulate both financial institutions and algorithm
providers.
Address ethical concerns surrounding the use of AI in the finance sector and consider
customer perception and trust when developing regulations for AI use in finance.
Foster collaboration between regulators and AI developers. This could build upon
existing mechanisms, such as the AI Public-Private Forum
(AIPPF) or the Veritas initiative bringing together MAS and the financial industry in
Singapore to strengthen internal governance of the application of AI.
Develop a regulatory framework for data sharing that balances privacy concerns with
the need for data sharing.
Ensure international coordination and consistency of regulations for AI in finance.
References
Abbeel, P., Quigley, M., Ng, A.Y., 2006. Using inaccurate models in reinforcement learning. Proceedings of the 23rd International Conference on Machine Learning (ICML).
es in Competitive Markets. Contributions to Finance and Accounting, Springer, Cham, pp. 327-340.
Anagnoste, S., 2018. Setting Up a Robotic Process Automation Center of Excellence. Manag. Dyn. Knowl. Econ. 6, 307-332.
Arulkumaran, K., Deisenroth, M.P., Brundage, M., Bharath, A.A., 2017. Deep Reinforcement Learning: A Brief Survey. IEEE Signal Process. Mag. 34, 26-38. [Link]
Ashta, A., Herrmann, H., 2021. Artificial intelligence and fintech: An overview of opportunities and risks for banking, investments, and microfinance. Strategic Change 30(3), 211-222. [Link]
Aziz, S., Dowling, M., Hammami, H., and Piepenbrink, A., 2022. Machine learning in finance:
A topic modelling approach. European Financial Management, Volume 28, Issue 2, Pages
744-770.
Baghdasaryan, V., Davtyan, H., Grigoryan, A., and Khachatryan, K., 2021. Comparison of
econometric and deep learning approaches for credit default classification. Strategic
Change, Volume 30, Issue 3, pp. 257-268.
Bahrammirzaee, A., 2010. A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems. Neural Comput & Applic 19, 1165-1195. [Link]
Bank for International Settlements, 2021. SupTech tools for prudential supervision and their
use during the pandemic. Available at: [Link]
Bank of England, 2022. DP5/22 - Artificial Intelligence and Machine Learning. Available at:
[Link]
Bao, W., Lianju, N., Yue, K., 2019. Integration of unsupervised and supervised machine
learning algorithms for credit risk assessment. Expert Syst. Appl. 128, 301 315.
[Link]
Barclays, 2019. Artificial Intelligence (AI) Payments | Barclays Corporate. Available at:
[Link]
Bazarbash, M., 2019. Fintech in financial inclusion: machine learning applications in assessing
credit risk. International Monetary Fund.
BEIS, 2020. The Use of Emerging Technologies for Regulation. Available at: [Link]
BEIS, 2021. The Potential Impact of Artificial Intelligence on UK Employment and the
Demand for Skills. BEIS. Available at: [Link]
BEIS, 2022. Regulatory Horizons Council publishes new report on unlocking UK innovation.
Available at: [Link]
Bennett, D., Niv, Y., Langdon, A.J., 2021. Value-free reinforcement learning: policy
optimization as a minimal model of operant behavior. Curr. Opin. Behav. Sci., Value based
decision-making 41, 114 121. [Link]
Berkey, R., Douglass, G. and Reilly, A., 2019.
[Link]/gb-en/insights/artificial-intelligence/ai-investments
Bezerra, P.C.S., Albuquerque, P.H.M., 2017. Volatility forecasting via SVR GARCH with
mixture of Gaussian kernels. Comput Manag Sci 14, 179 196.
[Link]
Biallas, M., O'Neill, F., 2020. Artificial Intelligence Innovation in Financial Services.
[Link]
Bisht, G., Kumar, S., 2022. Fuzzy Rule-Based Expert System for Multi Assets Portfolio
Optimization, in: Rushi Kumar, B., Ponnusamy, S., Giri, D., Thuraisingham, B., Clifton,
C.W., Carminati, B. (Eds.), Mathematics and Computing, Springer Proceedings in
Mathematics & Statistics. Springer Nature, Singapore, pp. 319-333.
[Link]
Black, J., 2014. Learning from Regulatory Disasters. LSE Legal Studies Working Paper No.
24/2014. DOI: 10.2139/ssrn.2519934
Blake, K., Gharbawi, M., Thew, O., Visavadia, S., Gosland, L., Mueller, H., 2022. Machine
learning in UK financial services [WWW Document]. URL
[Link]
Blei, D.M., Ng, A.Y., Jordan, M.I., 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research 3, 993-1022.
BoE and FCA, 2022. The AI Public-Private Forum: Final report. Bank of England. Available
at: [Link]
European Parliament, 2022. Artificial Intelligence Act and regulatory sandboxes. Think Tank |
European Parliament. Available at: [Link]
Fabri, L., Wenninger, S., Kaymakci, C., Beck, S., Klos, T., Wetzstein, S., 2022. Potentials and
Challenges of Artificial Intelligence in Financial Technologies. MCIS 2022 Proceedings. 14.
Fang, F., Dutta, K., Datta, A., 2014. Domain Adaptation for Sentiment Classification in Light
of Multiple Sources. Inf. J. Comput. 26, 586 598. [Link]
FCA, 2020. Digital Sandbox Pilot: FCA DataSprint, FCA. Available at:
[Link]
FCA, 2021. Supporting innovation in financial services: The Digital Sandbox, FCA. Available
at: [Link]
FCA, 2023. Synthetic Data Call for Input Feedback Statement. Available at:
[Link]
Fenwick, M., Kaal, W.A., Vermeulen, E.P.M., 2017. Regulation Tomorrow: What Happens
When Technology is Faster than the Law? American University Business Law Review,
Vol. 6, No. 3. Lex Research Topics in Corporate Law & Economics Working Paper No.
2016-8; U of St. Thomas (Minnesota) Legal Studies Research Paper No. 16-23; TILEC
Discussion Paper No. 2016-024. Available at SSRN: [Link] or [Link]
Ferran, E., 2023.
Journal of Financial Regulation, DOI: 10.1093/jfr/fjad001
FINRA, 2020. Artificial Intelligence (AI) in the Securities Industry
[Link]
Fischer, T., Krauss, C., 2018. Deep learning with long short-term memory networks for
financial market predictions. European Journal of Operational Research 270, 654-669.
Flavián, C., Pérez-Rueda, A., Belanche, D., Casaló, L.V., 2022. Intention to use analytical
artificial intelligence (AI) in services: the effect of technology readiness and awareness.
Journal of Service Management, 33(2), 293-320.
Forst, M., Kaplan, R.M., 2006. The importance of precise tokenizing for deep grammars.
Presented at the LREC, pp. 369 372.
FSB, 2020. The Use of Supervisory and Regulatory Technology by Authorities and Regulated
institutions [Link]
Fu, R., Huang, Y., Singh, PV, 2021. Crowds, Lending, Machine, and Bias. Information
Systems Research, Vol. 32, No. 1.
Furman, J., Seamans, R., 2019. AI and the Economy. Innovation Policy and the Economy 19,
161 191. [Link]
Göçken, M., Özçalıcı, M., Boru, A., Dosdoğru, A.T., 2016. Integrating metaheuristics and
Artificial Neural Networks for improved stock price prediction. Expert Systems with
Applications 44, 320-331. [Link]
Gomes, C., Jin, Z., Yang, H., 2021. Insurance fraud detection with unsupervised deep learning.
J. Risk Insur. 88, 591 624. [Link]
Gómez Martínez, R., Prado Román, M., & Plaza, C. P., 2019. Big data algorithmic trading
systems based on investors' mood. Journal of Behavioral Finance, 20(2), 227 238.
Gorenc Novak, M., Veluscek, D., 2016. Prediction of stock price movement based on daily
high prices. Quant. Financ. 16(5), 793-826.
[Link]
Gotthardt, M., Koivulaakso, D., Paksoy, O., Saramo, C., Martikainen, M., Lehner, O., 2020.
Current State and Challenges in the Implementation of Smart Robotic Process Automation
in Accounting and Auditing. ACRN J. Finance Risk Perspect.
[Link]
Grady, P., 2023. The AI Act Should Be Technology-Neutral. Report. Center for Data
Innovation. Available at: [Link]
Greenspan, H., Van Ginneken, B., Summers, R.M., 2016. Guest Editorial Deep Learning in
Medical Imaging: Overview and Future Promise of an Exciting New Technique. IEEE
Transactions on Medical Imaging 35, 1153 1159.
[Link]
Güler, K., & Tepecik, A., 2019. Exchange rates' change by using economic data with artificial
intelligence and forecasting the crisis. Procedia Computer Science, 158, 316 326.
Habert, B., Adda, G., Adda-Decker, M., de Mareuil, P.B., Ferrari, S., Ferret, O., Illouz, G.,
Paroubek, P., 1998. Towards Tokenization Evaluation. Presented at the LREC, pp. 427
432.
Hamann, F., 2021. 5 fintech and bank partnerships that are generating revenue. Subaio. URL
[Link]
revenue
Hasselt, H. van, Guez, A., Silver, D., 2016. Deep Reinforcement Learning with Double Q-
Learning. Proc. AAAI Conf. Artif. Intell. 30. [Link]
Hofmann, T., 2001. Unsupervised Learning by Probabilistic Latent Semantic Analysis. Mach.
Learn. 42, 177 196. [Link]
[Link]
Huang, X., Liu, X., & Ren, Y., 2018. Enterprise credit risk evaluation based on neural network
algorithm. Cognitive Systems Research, 52, pp. 317 324.
Islam, S.R., 2018. A Deep Learning Based Illegal Insider-Trading Detection and Prediction
Technique in Stock Market. CoRR
Jackson, P., 1986. Introduction to expert systems.
Jan, C.-L., 2021. Detection of Financial Statement Fraud Using Deep Learning for Sustainable
Development of Capital Markets under Information Asymmetry. Sustainability 13, 9879.
[Link]
Jemli, R., Chtourou, N., Feki, R., 2010. Insurability Challenges Under Uncertainty: An Attempt
to Use the Artificial Neural Network for the Prediction of Losses from Natural Disasters.
Panoeconomicus 57, 43 60. [Link]
Jullum, M., Løland, A., Huseby, R.B., Ånonsen, G., Lorentzen, J., (2020). Detecting money
laundering transactions with machine learning. J. Money Laund. Control 23(1), 173 186.
[Link]
Juneja, P., 2021. Disadvantages of Artificial Intelligence in Commercial Banking.
Management Study Guide. [Link]
Jussupow, E. et al., 2022. Business &
Information Systems Engineering, 64, pp. 293-309. DOI: 10.1007/s12599-022-00750-2
Kalyanakrishnan, S., Panicker, R.A., Natarajan, S., Rao, S., 2018. Opportunities and
Challenges for Artificial Intelligence in India, in: Proceedings of the 2018 AAAI/ACM
Conference on AI, Ethics, and Society.
Finance 27, 100352. [Link]
Kotsiantis, S.B., 2007. Supervised Machine Learning: A Review of Classification Techniques.
Informatica 31.
Krauss, C., Do, X.A., Huck, N., 2017. Deep neural networks, gradient-boosted trees, random
forests: Statistical arbitrage on the S&P 500. Eur. J. Oper. Res. 259, 689 702.
[Link]
Kruse, L., Wunderlich, N., Beck, R., 2019. Artificial Intelligence for the Financial Services
Industry: What Challenges Organizations to Succeed.
Kuiper, O., van der Burgt, J., Leijnen, S., van den Berg, M., 2021. Exploring Explainable AI
in the Financial Sector: Perspectives of Banks and Supervisory Authorities.
Kumar, A., Zhou, A., Tucker, G., Levine, S., 2020. Conservative Q-Learning for Offline
Reinforcement Learning, in: Advances in Neural Information Processing Systems. Curran
Associates, Inc., pp. 1179 1191.
Kumar, M., Thenmozhi, M., 2014. Forecasting stock index returns using ARIMA-SVM,
ARIMA-ANN, and ARIMA-random forest hybrid models. International Journal of
Banking, Accounting and Finance 5, 284 308.
[Link]
Kvamme, H., Sellereite, N., Aas, K., Sjursen, S., 2018. Predicting mortgage default using
convolutional neural networks. Expert Syst. Appl. 102, 207 217.
[Link]
Lahmiri, S., and Bekiros, S., 2019. Can machine learning approaches predict corporate
bankruptcy? Evidence from a qualitative experimental design. Quantitative Finance, 19(9),
pp. 1569 1577.
Lamberton, C., Brigo, D., Hoy, D., 2017. Impact of Robotics, RPA and AI on the Insurance
Industry: Challenges and Opportunities. SSRN Scholarly Paper.
Lee, I., 2017. Big data: Dimensions, evolution, impacts, and challenges. Business Horizons 60,
293 303. [Link]
Lee, I., Shin, Y.J., 2020. Machine learning for enterprises: Applications, algorithm selection,
and challenges. Business Horizons 63, 157-170. [Link]
Legg, M. & Bell, F., 2019. -
The University of Tasmania Law Review, 38(2). Available at:
[Link]
Lepenioti, K., Bousdekis, A., Apostolou, D., Mentzas, G., 2020. Prescriptive analytics:
Literature review and research challenges. International Journal of Information
Management, 50, pp. 57-70. DOI: 10.1016/j.ijinfomgt.2019.04.003
Levine, S., Abbeel, P., 2014. Learning Neural Network Policies with Guided Policy Search
under Unknown Dynamics, in: Advances in Neural Information Processing Systems.
Curran Associates, Inc.
Levine, S., Finn, C., Darrell, T., Abbeel, P., 2016. End-to-end training of deep visuomotor
policies. J. Mach. Learn. Res. 17, 1334 1373.
Li, W., & Mei, F., 2020. Asset returns in deep learning methods: An empirical analysis on SSE
50 and CSI 300. Research in International Business and Finance, 54, 101291.
Li, Y., Jiang, W., Yang, L., and Wu, T., 2018. On neural networks and learning systems for
business computing. Neurocomputing, 275, pp. 1150 1159
Liang, L., Wu, D., 2005. An application of pattern recognition on scoring Chinese corporations
financial conditions based on backpropagation neural network. Computers & Operations
Research 32, 1115 1129. [Link]
port [WWW Document]. URL
[Link] Report
Luo, S., Lin, X., and Zheng, Z., 2019. A novel CNN-DDPG based AI-trader: Performance and
roles in business operations. Transportation Research Part E: Logistics and Transportation
Review, 131, 68 79.
Mah, P.M., Skalna, I., Muzam, J., Song, L., 2022. Analysis of Natural Language Processing in
the FinTech Models of Mid-21st Century. J. Inf. Technol. Digit. World 4, 183 211.
[Link]
Mahmoud, M., Algadi, N., Ali, A., 2008. Expert System for Banking Credit Decision, in: 2008
International Conference on Computer Science and Information Technology. Presented at
the 2008 International Conference on Computer Science and Information Technology, pp.
813 819. [Link]
Mann, M., & Matzner, T., 2019. Challenging algorithmic profiling: The limits of data
protection and anti-discrimination in responding to emergent discrimination. Big Data &
Society, 6(2), 2053951719895805.
Mansour, K., 2020. 4 ways NLP technology can be leveraged for insurance. Early Metrics.
Available at: [Link]
Mazzarisi, P., Ravagnani, A., Deriu, P., Lillo, F., Medda, F., Russo, A., 2022. A machine
learning approach to support decision in insider trading detection. CONSOB-Scuola
Normale Superiore di Pisa, CONSOB FinTech Series.
McCallum, A., Li, W., 2003. Early results for named entity recognition with conditional
random fields, feature induction and web-enhanced lexicons, in: Proceedings of the
Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Presented at
the the seventh conference, Association for Computational Linguistics, pp. 188 191.
[Link]
McKinsey, 2020. Reimagining customer engagement for the AI bank of the future. Available
at: [Link]
Mei, L., 2022. Fintech Fundamentals: Big Data / Cloud Computing / Digital Economy. Stylus
Publishing, LLC.
Metaxiotis, K., Psarras, J., 2003. Expert systems in business: applications and future directions
for the operations researcher. Ind. Manag. Data Syst. 103, 361 368.
[Link]
Milana, C., Ashta, A., 2021. Artificial intelligence techniques in finance and financial markets:
A survey of the literature. Strategic Change 30, 189-209. [Link]
Mittal, S., Tyagi, S., 2019. Performance Evaluation of
Machine Learning Algorithms for Credit Card Fraud Detection, in: 2019 9th International
Conference on Cloud Computing, Data Science & Engineering (Confluence). Presented at
the 2019 9th International Conference on Cloud Computing, Data Science & Engineering
(Confluence), pp. 320 324. [Link]
Mokhatab Rafiei, F., Manzari, S.M., Bostanian, S., 2011. Financial health prediction models
using artificial neural networks, genetic algorithm and multivariate discriminant analysis:
Iranian evidence. Expert Systems with Applications 38, 10210 10217.
[Link]
Mor, S., and Gupta, G., 2021. Artificial intelligence and technical efficiency: The case of
Indian commercial banks. Strategic Change, 30(3), pp. 235 245.
Muscoloni, A., Thomas, J.M., Ciucci, S., Bianconi, G., Cannistraci, C.V., 2017. Machine
learning meets complex networks via coalescent embedding in the hyperbolic space. Nat.
Commun. 8, 1615. [Link]
Nesta, 2020. Anticipatory regulation. Nesta. Available at:
[Link]
Nevmyvaka, Y., Feng, Y., Kearns, M., 2006. Reinforcement learning for optimized trade
execution, in: Proceedings of the 23rd International Conference on Machine Learning,
pp. 673-680.
[Link]
Nikolopoulos, K., Petropoulos, F., Assimakopoulos, V., 2008. An expert system for forecasting
mutual funds in Greece. Int. J. Electron. Finance 2, 404 418.
[Link]
Nori, H. et al., 2023. arXiv preprint. DOI: 10.48550/arXiv.2303.13375
Nwogugu, M., 2006. Decision-making, risk and corporate governance: New dynamic
models/algorithms and optimization for bankruptcy decisions. Applied Mathematics and
Computation 179, 386 401. [Link]
OECD, 2019. The role of sandboxes in promoting flexibility and innovation in the digital age
OECD, 2020 Anticipatory innovation governance: Shaping the future through proactive policy
making, OECD. Available at: [Link]
[Link]
Óskarsdóttir, M., Bravo, C., Sarraute, C., Vanthienen, J., Baesens, B., 2019. The value of
big data for credit scoring: Enhancing financial inclusion using mobile phone data and
social network analytics. Applied Soft Computing, 74. [Link]
Osterrieder, J. A Primer on Natural Language Processing for Finance. Available at SSRN:
[Link] or [Link]
Oza, D., Padhiyar, D., Doshi, V., Patil, S., 2020. Insurance Claim Processing Using RPA Along
With Chatbot. SSRN Sch. Pap. [Link]
Patil, K., and Kulkarni, M. (2019). Artificial intelligence in financial services: Customer
chatbot advisor adoption. International Journal of Innovative Technology and Exploring
Engineering, 9(1), pp. 4296 4303
Pendharkar, P.C., 2005. A threshold-varying artificial neural network approach for
classification and its application to bankruptcy prediction problem. Computers &
Operations Research, Applications of Neural Networks 32, 2561 2582.
[Link]
Petrelli, D., and Cesarini, F. (2021) Artificial intelligence methods applied to financial assets
price forecasting in trading contexts with low (intraday) and very low (high-frequency)
time frames, Strategic Change, Volume 30, Issue 3, pp. 247-256
Pirilä, T., Salminen, J., Osburg, V.S., Yoganathan, V. and Jansen, B.J., 2022, January. The
Role of Technical and Process Quality of Chatbots: A Case Study from the Insurance
Industry. In Proceedings of the 55th Hawaii International Conference on System Sciences.
Prince, A. E., & Schwarcz, D., 2019. Proxy discrimination in the age of artificial intelligence
and big data. Iowa L. Rev., 105, 1257.
Prove., 2021. How Banks & Regulators Are Applying Machine Learning [WWW Document].
URL [Link]
PwC, 2020. How mature is AI adoption in financial services? Study. Can be retrieved here:
[Link]
[Link]
Radke, A.M., Dang, M.T., Tan, A., 2020. Using robotic process automation (RPA) to enhance
Item master data maintenance process. LogForum 16.
[Link]
Ramos, G. and Mazzucato, M., 2022. AI in the common interest, Project Syndicate. Available
at: [Link]
frameworks-capacity-building-by-gabriela-ramos-and-mariana-mazzucato-2022-12
Rawat, S., Rawat, A., Kumar, D., & Sabitha, A. S., 2021. Application of machine learning and
data visualization techniques for decision support in the insurance sector. International
Journal of Information Management Data Insights, 1(2), 100012.
Reed, Chris., 2007. Taking Sides on Technology Neutrality. SCRIPT-ed. 4.
10.2966/scrip.040307.263.
Riikkinen, M., Saarijärvi, H., Sarlin, P., Lähteenmäki, I., 2018. Using artificial intelligence
to create value in insurance. International Journal of Bank Marketing, Vol. 36 No. 6, pp.
1145-1168. [Link]
Romao, M., Costa, J., Costa, C.J., 2019. Robotic Process Automation: A Case Study in the
Banking Industry, in: 2019 14th Iberian Conference on Information Systems and
Technologies (CISTI). Presented at the 2019 14th Iberian Conference on Information
Systems and Technologies (CISTI), pp. 1 6.
[Link]
Ruan, Q., Wang, Z., Zhou, Y., & Lv, D., 2020. A new investor sentiment indicator (ISI) based
on artificial intelligence: A powerful return predictor in China. Economic Modelling, 88,
27 58.
Ruffolo, M., 2022. The Role of Ethical AI in Fostering Harmonic Innovations that Support a
Human-Centric Digital Transformation of Economy and Society, in: Cicione, F., Filice,
L., Marino, D. (Eds.), Harmonic Innovation: Super Smart Society 5.0 and Technological
Humanism, Lecture Notes in Networks and Systems. Springer International Publishing,
Cham, pp. 139 143. [Link]
Ryll, L., Barton, M.E., Zhang, B.Z., McWaters, R.J., Schizas, E., Hao, R., Bear, K., Preziuso,
M., Seger, E., Wardrop, R., Rau, P.R., Debata, P., Rowan, P., Adams, N., Gray, M.,
Yerolemou, N., 2020. Transforming Paradigms: A Global AI in Financial Services Survey.
[Link]
Saeys, Y., Van Gassen, S., Lambrecht, B.N., 2016. Computational flow cytometry: helping to
make sense of high-dimensional immunology data. Nat. Rev. Immunol. 16, 449 462.
[Link]
Sang, E.F.T.K., De Meulder, F., 2003. Introduction to the CoNLL-2003 Shared Task:
Language-Independent Named Entity Recognition.
[Link]
Sazali, S.S., Rahman, N.A., Bakar, Z.A., 2016. Information extraction: Evaluating named
entity recognition from classical Malay documents, in: 2016 Third International
Conference on Information Retrieval and Knowledge Management (CAMP). Presented at
the 2016 Third International Conference on Information Retrieval and Knowledge
Management (CAMP), pp. 48 53. [Link]
Shamima, A., Alshater, M.M., El Ammari, A., Hammami, H., 2022. Artificial
intelligence and machine learning in finance: A bibliometric review. Research in
International Business and Finance, Volume 61.
Shiue, W., Li, S.-T., Chen, K.-J., 2008. A frame knowledge system for managing financial
decision knowledge. Expert Syst. Appl. 35, 1068 1079.
[Link]
effective solutions for anti-money laundering and counter-terror financing initiatives in
charitable fundraising. J. Money Laund. Control. [Link]
0100.
Sotnik, S., Deineko, Z., Lyashenko, V., 2022. Key Directions for Development of Modern
Expert Systems.
Sridhar, D., Getoor, L., 2019. Estimating Causal Effects of Tone in Online Debates.
[Link]
Stone, M., Aravopoulou, E., Ekinci, Y., Evans, G., Hobbs, M., Labib, A., ... & Machtynger, L.,
2020. Artificial intelligence (AI) in strategic marketing decision-making: a research
agenda. The Bottom Line, 33(2), 183-200.
Svetlova, E., 2022. AI ethics and systemic risks in finance. AI and Ethics 2, 713-725.
[Link]
The Alan Turing Institute. Common Regulatory Capacity for AI. Available at:
[Link]
Thekkethil, M.S., Shukla, V.K., Beena, F., Chopra, A., 2021. Robotic Process Automation in
Banking and Finance Sector for Loan Processing and Fraud Detection, in: 2021 9th
International Conference on Reliability, Infocom Technologies and Optimization (Trends
and Future Directions) (ICRITO). Presented at the 2021 9th International Conference on
Reliability, Infocom Technologies and Optimization (Trends and Future Directions)
(ICRITO), pp. 1 6. [Link]
Thennakoon, A., Bhagyani, C., Premadasa, S., Mihiranga, S., Kuruwitaarachchi, N., 2019.
Real-time Credit Card Fraud Detection Using Machine Learning, in: 2019 9th
International Conference on Cloud Computing, Data Science & Engineering (Confluence).
Presented at the 2019 9th International Conference on Cloud Computing, Data Science &
Engineering (Confluence), pp. 488 493.
[Link]
Thowfeek, M.H., Samsudeen, S.N., Sanjeetha, M.B.F., 2020. Drivers of Artificial Intelligence
in Banking Service Sectors. Solid State Technology.
Tian, H., Zheng, H., Zhao, K., Liu, M.W., Zeng, D.D., 2022. Inductive Representation
Learning on Dynamic Stock Co-Movement Graphs for Stock Predictions. INFORMS
Journal on Computing, Volume 34, Issue 4, pp. 1841-2382.
Tiwari, R., Srivastava, S., & Gera, R., 2020. Investigation of artificial intelligence techniques
in finance and marketing. Procedia Computer Science, 173, pp. 149 157.
Toronto Centre, 2022. Supervisory implications of artificial intelligence and machine
learning
[Link]
[Link]
Torshin, I.Yu., Rudakov, K.V., 2015. On the theoretical basis of metric analysis of poorly
formalized problems of recognition and classification. Pattern Recognit. Image Anal. 25,
577-587. [Link]
Tsai, C.-F., Wu, J.-W., 2008. Using neural network ensembles for bankruptcy prediction and
credit scoring. Expert Systems with Applications 34, 2639 2649.
[Link]
Turek, M., 2017. DARPA - Explainable Artificial Intelligence (XAI) Program [WWW
Document]. URL [Link]
UKRI, n.d. World-leading position in technologies of tomorrow. UKRI. Available at:
[Link]
Ulbricht, L. and Yeung, K., 2022. Algorithmic regulation: A maturing concept for
investigating regulation of and through algorithms. Regulation & Governance, 16(1), pp.
3-22. DOI: 10.1111/rego.12437
Umuhoza, E., Ntirushwamaboko, D., Awuah, J., Birir, B., 2020. Using Unsupervised Machine
Learning Techniques for Behavioral-based Credit Card Users Segmentation in Africa.
SAIEE Afr. Res. J. 111, 95 101. [Link]
Ursachi, O., 2019. Role and applications of NLP in Cybersecurity, Medium. Medium.
Available at: [Link]
cybersecurity-333d9280c737
Van Hove, Leo, and Antoine Dubus. 2019. M-PESA and financial inclusion in Kenya: Of
paying comes saving? Sustainability 11: 568.
Vandrangi, S.K., 2022. Predicting the insurance claim by each user using machine learning
algorithms. Journal of Emerging Strategies in New Economics, 1(1), pp. 1-11.
Varma, V. and Mukherjee, S., 2022. Insider trading: the role of AI as a prevention tool.
[Link]
Vella, V., & Ng, W. L., 2015. A dynamic fuzzy money management approach for controlling
the intraday risk-adjusted performance of AI trading algorithms. Intelligent Systems in
Accounting, Finance and Management, 22(2), 153 178.
Veloso, M., Balch, T., Borrajo, D., Reddy, P., Shah, S., 2021. Artificial intelligence research
in finance: discussion and examples, Oxford Review of Economic Policy, Volume 37,
Issue 3, Pages 564 584, [Link]
Villar, A.S., Khan, N., 2021. Robotic process automation in banking industry: a case study on
Deutsche Bank. Journal of Banking and Financial Technology 5, 71-86.
[Link]
Wang, H., Li, C., Gu, B., Min, W., 2019. Does AI-based Credit Scoring Improve Financial
Inclusion? Evidence from Online Payday Lending. ICIS 2019 Proceedings. 20.
[Link]
Webster, J.J., Kit, C., 1992. Tokenization as the Initial Phase in NLP, in: COLING 1992
Volume 4: The 14th International Conference on Computational Linguistics. Presented at
the COLING 1992.
Wei, Y., Yildirim, P., Van den Bulte, C., & Dellarocas, C., 2016. Credit scoring with social
network data. Marketing Science, 35(2), 234-258.
White, H., 1988. Economic prediction using neural networks: the case of IBM daily stock
returns, in: IEEE 1988 International Conference on Neural Networks, pp. 451-458 vol. 2.
[Link]
White House, 2022. The impact of artificial intelligence on the future of workforces in the
European Union and the United States of America. Available at: [Link]
Willcocks, L., Lacity, M., Craig, A., 2015. Robotic Process Automation at Xchanging.
Wittmann, X., and Lutfiju, F., 2021. Adopting AI in the Banking Sector The Wealth
Management Perspective. In Society 5.0. Springer International Publishing.
WorldBank. 2020. Digital Financial Inclusion. Available online:
[Link]
inclusion
Wu, D., Liang, L., Yang, Z., 2008. Analyzing the financial distress of Chinese public
companies using probabilistic neural networks and multivariate discriminate analysis.
Socio-Economic Planning Sciences 42, 206-220.
Wyrobek, J., 2020. Application of machine learning models and artificial intelligence to
analyze annual financial statements to identify companies of unfair corporate culture.
Procedia Computer Science, 176, 3037-3046.
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J., 2019. Explainable AI: A Brief Survey
on History, Research Areas, Approaches and Challenges. Nat. Lang. Process. Chin.
Comput., Lecture Notes in Computer Science 563 574. [Link]
030-32236-6_51
Xu, J., 2022. The Future and Fintech: ABCDI and Beyond, pp. 1-36.
Yang, L., Zhang, Y., 2020. Digital Financial Inclusion and Sustainable Growth of Small and
Micro Enterprises: Evidence Based on China's New Third Board Market Listed
Companies. Sustainability 12, 3733.
Yang, W., Fang, Z., Hui, L., 2016. Study of an Improved Text Filter Algorithm Based on Trie
Tree. Presented at the 2016 International Symposium on Computer, Consumer and Control
(IS3C), pp. 594 597. [Link]
Yu, M., Yang, Z., Kolar, M., Wang, Z., 2019. Convergent Policy Optimization for Safe
Reinforcement Learning, in: Advances in Neural Information Processing Systems. Curran
Associates, Inc.
Yunusoglu, M.G., Selim, H., 2013. A fuzzy rule based expert system for stock evaluation and
portfolio construction: An application to Istanbul Stock Exchange. Expert Syst. Appl. 40,
908 920. [Link]
Zeinalizadeh N, Shojaie AA, Shariatmadari M (2015). Modeling and analysis of bank customer
satisfaction using neural networks approach. International Journal of Bank Marketing 33:
717-732.
Zetzsche, D.A., Arner, D.W., Buckley, R.P., Tang, B., 2020. Regulating AI in Finance: Putting
the Human in the Loop. Oxford Business Law Blog. Available at: [Link]
Zhang, B.Z., Ashta, A., and Barton, M.E., 2021. Do FinTech and financial incumbents have
different experiences and perspectives on the adoption of artificial intelligence? Strategic
Change, Volume 30, Issue 3, Pages 223-234.
Zhang, Z., Zohren, S., Roberts, S., 2020. Deep Reinforcement Learning for Trading. J. Financ.
Data Sci. 2, 25 40. [Link]
Zheng, X., Zhu, M., Li, Q., Chen, C., Tan, Y., 2019. FinBrain: when finance meets AI 2.0.
Frontiers Inf Technol Electronic Eng 20, 914-924. [Link]
Zhong, X., Enke, D., 2019. Predicting the Daily Return Direction of the Stock Market using
Hybrid Machine Learning Algorithms. Financial Innovation 5, 1 20.
[Link]