Journal of Economic and Banking Studies
No.8, Vol.4 (2), December 2024 pp. 38-49
© Banking Academy of Vietnam
ISSN 2734-9853
Developing an ethical framework for artificial
intelligence management in the financial sector
Nguyen, Bich Ngan
Banking Academy of Vietnam
Corresponding Author.
E-mail address: ngannb@hvnh.edu.vn (Nguyen, B. N.)
Abstract
The exponential growth of artificial intelligence (AI) has yielded significant achievements across various sectors of the economy, including finance. This growth, however, has also introduced economic instabilities that regulators need to address by establishing ethical frameworks. These frameworks are essential to ensure that AI is managed properly and does not contravene a country's financial regulations and legal standards. Drawing on the experiences of countries and regions including the United States of America, Europe, and China, through regulations publicized by their authorities, this study aims to offer recommendations for regulatory bodies on developing an ethical framework for AI management in the financial sector, with the ultimate purpose of using AI in a way that is sustainable and socially responsible, thereby contributing to the establishment of a developed and sustainable financial system. The study also discusses the potential challenges of implementing AI ethical frameworks in organizations in the financial sector, which may require adjustments to the framework when it is applied in practice.

Article history
Received: 17th Jul 2024
Revised: 03rd Dec 2024
Accepted: 10th Dec 2024

Keywords: Ethical framework, Artificial intelligence, Financial sector

DOI: 10.59276/JEBS.2024.12.2685
1. Introduction
Since the field's inception in 1956 and through nearly seven decades of development, the world has witnessed the relentless advancement of artificial intelligence (AI) within the era of the fourth industrial revolution. AI has introduced
significant innovations, from automating
processes to extensive data analysis, yet it
has also presented serious ethical challenges.
Consequently, constructing a robust ethical
framework for AI across all economic and
social sectors, particularly in finance, emerges
as an imperative issue. The ethical framework
for AI management is a set of guidelines and
principles that help ensure the responsible
development, deployment, and use of AI in
finance. These frameworks aim to mitigate
potential risks while maximizing the benefits
of AI technologies in the financial sector.
The need to develop an ethical framework for AI in the financial sector, which is crucial to ensuring the responsible and secure implementation of AI technologies, has been noted in several recent studies. Issa (2020) emphasized the need for a reasonable assurance framework for ethical AI systems and for understanding how AI processes, especially their ethical aspects, can impact financial information. Regulation plays a vital role in ensuring the safe adoption of AI in sectors like finance, protecting human rights and upholding existing laws and ethical considerations (Baronchelli, 2024). Furthermore, stakeholders in the financial sector must adopt a proactive approach to regulating AI in order to prevent potential financial harm and encourage sustainable innovation (Truby et al., 2020).
Additionally, the ethical implications of AI in
finance extend to systemic risks, emphasizing
the necessity of accounting for AI-enhanced
risks ethically (Svetlova, 2022). According to
Pasupuleti (2024), collaborative efforts among
technologists, financial experts, and policy-
makers are essential to harness AI’s potential
responsibly and shape a future where finance is
more ethical and secure.
In practice, the prominent issues currently associated with the instabilities AI introduces in the financial sector include the following: Firstly, AI's utilization of vast customer data for analysis and behavioral predictions raises concerns over privacy and data security. Hence, a clear ethical framework is essential to ensure AI's safe use while respecting customer privacy. Secondly, unchecked AI can create or exacerbate discrimination. In finance, this could lead to unfair lending practices or loan conditions based on biased algorithms; an ethical framework, therefore, must clearly define criteria for AI to operate fairly and without discrimination (a minimal illustration of such a fairness check follows this paragraph). Thirdly, AI's "black box" nature, with its decision-making and operational processes being obscure, can erode trust, an essential element for all organizations within the sector. Thus, an ethical framework should promote transparency and accountability in AI usage, clarifying system operations for both customers and regulators. Fourthly, another ethical concern is the potential misuse of AI for nefarious purposes, such as insider trading or money laundering. AI-enabled financial systems can facilitate rapid and undetectable alterations to financial data, potentially exploited for illegal gains (Antwi et al., 2024).
The evidence above suggests a pressing need
for developing AI regulation to prevent viola-
tions of financial and legal norms within a
nation. An ethical framework must encompass
principles that allow AI to operate within legal
boundaries, while also mitigating potential
risks to financial-banking institutions, partners,
clients, and governmental regulatory bodies.
Yet, the ethical framework governing AI regu-
lations in various countries remains limited.
The European Union has taken a leading role
in defining and implementing ethical principles
for robots and AI within governmental policies
(Langman et al., 2021). In contrast, while the
United States has adopted numerous ethical
frameworks, empirical research on AI ethics in
the financial sector remains scarce (Lee, 2000).
However, the United States, along with China
and the European Union, stands as a leader
in developing legal frameworks to address
AI’s existing challenges. These countries have
systematically developed comprehensive AI
governance policies (Heymann, 2023).
The development of AI policies and ethical
frameworks has garnered significant interest
in European and North American countries,
unlike in other parts of the world where it has
received less attention (Roche et al., 2023).
The increasing focus on AI across various
economic sectors has underscored the need
to establish ethical principles for sustainable
and ethical AI practices (Cossu et al., 2021).
Moreover, public debates on AI ethics have
intensified, but only a small fraction of studies have delved into AI morality theories and principles grounded in an ethical framework, revealing a gap in this discourse and research (Hartwig et al., 2023). Figure 1 provides a conceptual structure developed by the ECB for systematically evaluating the advantages and potential risks associated with AI within the operations of individual financial institutions.
Figure 1. Benefits and risks of AI posed to the financial system (Source: Leitner et al., 2024)
To fill the gap in literature on the ethical frame-
work of AI management in finance as stated
above, this study contributes to the development
of an ethical framework for AI in the financial
sector using a qualitative method. The study syn-
thesizes experiences in constructing ethical
and management frameworks for AI via legal
documents published by authorities in the US,
Europe, and China. Then, the study offers sug-
gestions for regulatory bodies to further develop an ethical framework for AI management
in the financial domain based on these findings.
Besides the conclusion and the list of references, the
remainder of the research includes three main
parts: (i) Review of legal and ethical frameworks for AI management by national governments, (ii) Recommendations on developing AI ethical frameworks in the financial sector across countries, and (iii) Discussion of the challenges of implementing AI ethical frameworks on organizations in the financial sector.
2. Overview of legal and ethical frameworks for AI management by national governments
Figure 2 models the relationship between the
implementation tools in the AI management
policies of national governments, thereby
demonstrating a significant resemblance
across AI management policies. Each circle
denotes a group of AI management policies adopted by governments around the world.
The circles located centrally and of larger size
signify policy groups that are effective across a
broader range of countries, with some policies
being implemented in almost all the nations.
These results are drawn from a database of over 1,600 AI policies, ranging from regulations to research grants and national strategies, synthesized by Deloitte (2023).
Figure 2. Unified policies on AI management by national governments (Source: Deloitte, 2023)
Figure 3 provides a generalization of trends in governments' use of AI management tools around the world over time. It highlights a considerable similarity within a sample of 69 countries (Deloitte, 2023), showing that over the past decade governments have been comprehensively transitioning their AI management models from "Understand" policies to "Grow" models. A notable observation is the rapid development of "Shape" policy types from 2018 to 2022, which, however, did not receive attention in 2023; even earlier, there were interruptions in some years (e.g., 2017 and 2015). Overall, AI management policies are trending towards being quite permissive in order to maximize the development potential of this technology, albeit alongside its inherent risks. Consequently, numerous governments have enacted or are in the process of enacting regulations to mitigate the potential adverse impacts AI may have on society and the economy.

Where:
Understand: When confronted with a new and rapidly developing technology, governments must first strive to understand it. Many utilize collaborative mechanisms such as coordinating agencies or advisors to gather a diverse range of expertise aimed at aiding the comprehension and prediction of AI's potential impacts.
Grow: With a clearer understanding of what AI is and its potential impacts, most countries develop national strategies to implement funding, educational programs, and other designed tools to help promote the development of the AI industry.
Shape: As the AI industry continues to expand and evolve, governments seek to shape its development and application through the use of tools such as standards or voluntary regulations.
Source: Deloitte (2023)
Figure 3. The similarity in AI management tools of national governments

This paper employs information collected from regulations publicized by authorities in the United States of America, Europe, and China, from which it presents both the commonalities and the differences in the regulatory frameworks for AI management in these countries. By referencing the diverse experiences of these leading countries in AI application and development, the study offers recommendations for other nations on establishing regulatory frameworks for managing artificial intelligence in the financial sector in part 3 and, alongside these suggestions, discusses the challenging impacts of implementing AI ethical frameworks on organizations in the financial sector in part 4.

2.1. AI regulations in the United States of America

In the United States, the federal government, along with local and state authorities, is actively developing protective measures and designing related frameworks and policies to simultaneously encourage AI development and mitigate its societal harms. Specifically:
Firstly, at the federal level, AI risk assessment currently ranks as a top priority. U.S. legislators are particularly attuned to the challenges of understanding how operational procedures based on algorithms, known as "black boxes," are created, how these procedures function, and how they impact the public. To address these issues, the Algorithmic Accountability Act (H.R. 5628; S. 2892) is currently under debate in Congress. Once enacted, it will require organizations using AI systems to identify and explain the decision-making processes in significant areas impacting citizens' lives before and after algorithms are employed. Similarly, the DEEP FAKES Accountability Act (H.R. 3230) and the Digital Services Oversight and Safety Act (H.R. 6796), if passed by Congress and enacted, will mandate organizations to be transparent about
the creation and dissemination of misleading in-
formation generated by AI. The most significant
legal document currently in effect is Executive
Order (E.O.) 14110 on the Safe, Secure, and
Trustworthy Development and Use of Artificial
Intelligence, issued in October 2023. Based on
the White House’s Detailed Plan on AI Bill of
Rights and the AI Risk Management Framework
by the National Institute of Standards and Tech-
nology, this E.O. aims for responsible deploy-
ment and use of AI with seven core objectives
considered as the ethical framework of this
legislation, including:
Minimizing risks from increased adoption of
AI in industry and personal use;
Attracting and uniting AI talent by address-
ing concerns of small business founders and
protecting intellectual property rights;
Protecting workers whose livelihoods may
be significantly affected by AI;
Safeguarding citizens’ rights through en-
hanced oversight and minimizing AI biases;
Strengthening consumer protection;
Enhancing protection of consumer data,
personal information, and privacy;
Improving federal government use of AI
with greater supervision and safeguards;
Upholding U.S. leadership in global AI gov-
ernance, ideally in cooperation with interna-
tional partners.
Secondly, at the state and city levels, Califor-
nia, Colorado, Connecticut, Texas, New York,
and Illinois are all making legislative strides to
monitor AI’s drawbacks. Among these, Colo-
rado has advanced the furthest by enacting
regulations governing predictive modeling and
algorithms, while New York City has intro-
duced Local Law 144, regulating automated
employment decisions facilitated by AI.
2.2. AI regulations in Europe
In Europe, the most significant legal regula-
tion concerning AI governance is the European
Union’s Artificial Intelligence Act, adopted on
March 13, 2024. It applies to all 27 member
states and represents the world’s first legal
framework specifically dedicated to the regula-
tion of this technological field. Similar to the
United States, the primary goal of the EU’s AI
Act is to require developers creating or work-
ing with high-risk AI applications to test their
systems, document their use, and minimize risk
by implementing appropriate safety measures.
The EU's AI Act is expected to enter into force by the end of 2024 or early 2025. Additionally, the EU
is also exploring another piece of legislation,
known as the AI Liability Directive, aimed at
ensuring that individuals harmed by AI can
receive financial compensation.
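To make the testing and documentation requirement described in this subsection more concrete, the sketch below outlines one way a provider might keep an internal record for a high-risk system before deployment. This is a hedged illustration only: the field names, risk tiers, and completeness check are assumptions chosen for this example and do not reproduce the AI Act's official documentation annexes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    # Illustrative internal record for a high-risk AI system (assumed fields).
    name: str
    intended_purpose: str
    risk_tier: str                       # e.g. "high" for a credit-scoring model
    testing_evidence: List[str] = field(default_factory=list)
    risk_mitigations: List[str] = field(default_factory=list)
    human_oversight: str = ""

    def is_deployment_ready(self) -> bool:
        """Toy completeness check: a high-risk system needs documented tests,
        mitigations, and a human-oversight arrangement before deployment."""
        if self.risk_tier != "high":
            return True
        return bool(self.testing_evidence and self.risk_mitigations
                    and self.human_oversight)

record = AISystemRecord(
    name="RetailCreditScorer",
    intended_purpose="Consumer creditworthiness assessment",
    risk_tier="high",
    testing_evidence=["2024-Q3 backtest report", "bias audit v1.2"],
    risk_mitigations=["approval-rate monitoring", "manual review of declines"],
    human_oversight="Credit officers can override any automated decision",
)
print(record.is_deployment_ready())  # True once the record is complete

Even a simple record of this kind gives supervisors a concrete artifact to inspect, which illustrates the practical intent behind documentation requirements for high-risk systems.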
2.3. AI regulations in China
To date, AI management in China has not been governed by a unified document; however, China's
approach to AI regulation is reaching a turn-
ing point. Previously, instead of managing AI
as a whole, the country introduced individual
regulations when a new AI product became
prominent and had a significant impact on the
public, with principles and regulations issued
in the years 2017, 2019, 2021, and 2022 (see
Table 1). However, since 2023, Chinese policy-
makers have begun the process of drafting
comprehensive national AI legislation. In June
2023, the State Council of China announced
the enactment of the Artificial Intelligence
Law, similar to the EU's AI Act, as part of the
country’s legislative agenda. Moreover, the Cy-
berspace Administration of China is currently
soliciting public opinions on the Measures for
the Management of Generative Artificial Intel-
ligence Services, aimed at regulating a range of
services provided by AI to the public.
The comparison (see Table 2) reveals signifi-
cant differences in governance perspectives
and the development of ethical frameworks in
AI management across countries. Generally,
a key distinction between the AI management
systems of these three regions lies in the regu-
latory scope. In the United States, the regula-
tory scope varies across different economic
sectors. This reflects a fragmented approach
where each sector has its own guidelines and
policies for managing AI. This model pro-