Financial regulatory policies need to dynamically balance the relationship between efficiency and stability. At present, it is particularly necessary to coordinate the contradiction between improving support for the real economy and preventing systemic risks. If used properly, artificial intelligence can help improve the quality of financial services and enhance risk management at the same time.
The application of artificial intelligence in finance can achieve the effect of "three increases and three decreases": expanding the scale of services, improving efficiency, and enhancing customer experience; reducing costs, reducing physical contact, and controlling risk. To date, results in payments and credit have been remarkable, while results in areas such as robo-advisory remain unsatisfactory. Big data and artificial intelligence also have a profound impact on financial risk mechanisms, for example by disrupting the traditional "financial accelerator" mechanism, with important implications for financial stability and regulatory policy.

In response to challenges such as algorithm black boxes, data privacy, and risk concentration, future financial supervision can consider adding a technical regulatory dimension, establishing an algorithm audit system, applying the concept of regulatory sandboxes, and cooperating with innovative entities to jointly prevent systemic risks.
Regarding the regulation of artificial intelligence, countries and regions typically weigh "promoting development" against "controlling risks" and lean toward whichever better suits their current stage of development.
- European Union
The European Union is a pioneer in regulating the research, development, and application of artificial intelligence through legislation. The EU first proposed AI legislation as early as April 2021, and in 2023 the Artificial Intelligence Act was voted through by the plenary session of the European Parliament. The industry expects the clarity of these rules to have a notable impact on the financial sector. In particular, the AI Act introduces tiered supervision of AI, dividing AI applications into four risk levels: unacceptable risk, high risk, limited risk, and low risk. It specifies management priorities for building and deploying high-risk AI, which both guides financial institutions in their use of high-risk AI and clears the way for them to explore low-risk AI scenarios more freely. As the first formal law in the AI field, the AI Act provides an important reference for AI supervision worldwide.
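The Act's tiered approach can be thought of as a lookup from use case to risk level to obligations. The sketch below is illustrative only: the tier assignments and obligation summaries are simplified assumptions for demonstration, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels in the EU AI Act (simplified for illustration)."""
    UNACCEPTABLE = 4   # prohibited outright
    HIGH = 3           # permitted under strict obligations
    LIMITED = 2        # transparency obligations
    LOW = 1            # largely unregulated

# Illustrative mapping of financial use cases to tiers. These assignments
# are assumptions for demonstration, not legal classifications.
FINANCIAL_USE_CASES = {
    "credit_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "internal_spam_filter": RiskTier.LOW,
}

def obligations(tier: RiskTier) -> str:
    """Return a rough summary of the regulatory posture for a tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment, logging, human oversight",
        RiskTier.LIMITED: "disclosure to users",
        RiskTier.LOW: "no specific obligations",
    }[tier]
```

Under this scheme, a credit-scoring model would fall in the high-risk tier and carry the heaviest compliance burden, while a back-office spam filter would face essentially none.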
- United Kingdom
Unlike the EU's unified, AI-specific legislation, the British government takes the position that AI research, development, and application do not fall outside existing law, so no separate AI statute is needed. In financial regulation, the Bank of England's discussion paper DP5/22, "Artificial Intelligence and Machine Learning", explains how existing laws and regulations apply to AI from the perspectives of customer protection, competition, and model risk. The Bank also promotes exchanges between government and industry through forums and similar channels, jointly offering guidance for the financial industry's exploration of AI research, development, and application.
- United States
In 2021, the United States enacted the National Artificial Intelligence Initiative Act, emphasizing the need to maintain US leadership in AI research. At the same time, it established the National Artificial Intelligence Initiative Office to promote and coordinate AI cooperation among federal departments, industry, academia, research institutions, and local governments. On legislation, the United States takes a stance similar to the United Kingdom's, intending to avoid mandatory laws that could hinder AI development, while the federal government provides relatively clear guidance through a series of non-binding guidelines. In financial regulation, five federal financial regulators, including the Federal Reserve and the Consumer Financial Protection Bureau, jointly issued a notice in March 2021 to gather information on financial institutions' use of AI, reminding institutions to pay particular attention to AI risks in explainability, overfitting, cybersecurity, third-party vendors, and fair lending.
Main risks of concern to financial regulators
Although countries and regions differ on whether to legislate separately for artificial intelligence, regulators hold relatively consistent views on its main risks and regulatory priorities, which can be roughly grouped into six aspects.
- Stability and reliability
The financial industry generally places a higher premium on stable operations than most industries. In November 2023, the Monetary Authority of Singapore penalized a major local bank for inadequate operational resilience. Like other information systems, AI faces challenges in service continuity and information security, and it is also exposed to AI-specific attack methods such as data poisoning. The Federal Reserve further noted in 2021 that AI's ability to continuously learn and iterate as underlying data changes complicates its validation, monitoring, and record-keeping. Ensuring that AI operates stably and produces reliable output is therefore one of regulators' current focal points.
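One standard way institutions monitor a model whose inputs or outputs may shift over time is the Population Stability Index (PSI), which compares the current distribution of a model's scores against a validation-time baseline. The implementation below is a minimal stdlib sketch; the conventional alert thresholds in the comment are industry rules of thumb, not regulatory requirements.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a current sample of model scores.
    Rule of thumb: < 0.1 stable, 0.1-0.25 worth monitoring, > 0.25 investigate."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor each bin's share to avoid log(0) on empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An identical distribution yields a PSI of zero; a drifted one pushes the index up, prompting the revalidation and documentation the Federal Reserve's comments point to.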

- Privacy protection
The financial industry is generally considered second only to healthcare in the volume of sensitive data it holds. Data such as identity information and transaction records carry both high commercial analytical value and high sensitivity. As public awareness of privacy grows and privacy and data-security rules such as the EU General Data Protection Regulation take effect, regulators require financial institutions to meet the public's privacy expectations when building and using artificial intelligence. For example, Canada's Office of the Superintendent of Financial Institutions emphasized in its report "A Canadian Perspective on Responsible AI" that the financial industry's use of AI must heed public concerns about privacy protection.
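A common building block for using sensitive records in model training without exposing raw identifiers is pseudonymization, for instance via keyed hashing. The sketch below is one minimal approach, not a complete privacy program; key management and re-identification risk still need separate controls.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Map a raw customer identifier to a stable token via HMAC-SHA256.
    The same customer always maps to the same token (so analytics and
    model training still work), but the raw identifier cannot be
    recovered without the institution-held key."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("ID-12345678", b"institution-held-secret")
```

Plain unkeyed hashing is weaker here: an attacker who can enumerate likely identifiers (account numbers, phone numbers) can reverse it by brute force, which is why the keyed variant is preferred.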
- Fairness and ethics
As an important part of modern society, the financial industry carries social responsibilities beyond its commercial role: maintaining social fairness, protecting the legitimate interests of vulnerable groups, and maximizing the benefit of financial resource allocation. As artificial intelligence is applied in areas such as credit, regulators are concerned with whether financial institutions use AI to deliberately discriminate against customers, and whether new technologies impair access to financial services for less tech-savvy groups such as the elderly. For example, the Dutch central bank listed fairness and ethics among the six basic principles in its "General Principles for the Use of Artificial Intelligence in the Financial Sector". The United States, in its "Blueprint for an AI Bill of Rights", proposed that users should be able to choose human service instead of AI-driven service where doing so does not harm the public interest.
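One simple screen regulators and institutions use for the discrimination concern above is demographic parity: comparing approval rates across customer groups. The sketch below uses made-up decision data; a large gap flags potential disparate impact for review, though by itself it is not proof of discrimination.

```python
def approval_rate(decisions):
    """Share of approved applications (1 = approve, 0 = decline)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates across groups, plus the
    per-group rates for the audit trail."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decision outcomes for two customer groups.
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approved
})
```

Here the 37.5-point gap would trigger a closer look at whether legitimate risk factors or proxy discrimination explain the difference.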
- Openness and transparency
The industry believes that opaque, unpredictable AI models increase financial risk. Gary Gensler, chairman of the U.S. Securities and Exchange Commission, has warned that financial institutions relying on similar models to make decisions could produce herd behavior and trigger the next financial crisis. Federal Reserve officials have likewise noted that without understanding an AI system's logic, financial institutions may struggle to anticipate its behavior and hidden risks. Regulators therefore typically require AI to be explainable and auditable to a certain degree: financial institutions should understand the decision logic of the AI they use, and should be able to supply the required information when regulators or other third-party institutions need to examine an AI system's decisions. For example, the Dutch central bank's "General Principles for the Use of Artificial Intelligence in the Financial Sector" requires financial institutions to be transparent about the decision logic of the AI they use, and requires AI decisions and model outputs to be traceable and explainable. In addition, the EU is attempting to further divide AI into types, such as systems that interact with natural persons, perform emotion recognition or biometric categorization, or generate images and video, and to establish corresponding transparency standards for each type.
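For a simple model, the traceable-and-explainable requirement can be met by recording each decision together with a per-feature breakdown of how the score was reached. The sketch below assumes a hypothetical linear credit-scoring model with made-up weights and threshold; real systems and attribution methods are more involved, but the audit record's shape is the point.

```python
import datetime
import json

def score_with_explanation(weights, features):
    """Score a linear model and return each feature's contribution,
    so the decision logic is reconstructible after the fact."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

def audit_record(applicant_id, score, contributions, threshold=0.5):
    """Serialize a reviewable decision record for regulators or auditors."""
    return json.dumps({
        "applicant": applicant_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "score": round(score, 4),
        "decision": "approve" if score >= threshold else "decline",
        # Sorted by absolute impact so an auditor sees the drivers first.
        "drivers": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    })

# Hypothetical weights and applicant features, for illustration only.
score, contributions = score_with_explanation(
    {"income": 0.4, "debt_ratio": -0.3},
    {"income": 1.0, "debt_ratio": 0.5},
)
record = audit_record("APP-001", score, contributions)
```

Persisting such records is what lets an institution answer, months later, exactly why a given applicant was declined.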
- Liability for infringement
Artificial intelligence itself lacks the status of a civil legal subject, so when AI causes harm, apportioning liability among the parties involved is more complicated than before. For example, when an outsourced AI system makes an erroneous judgment that causes a financial institution heavy investment losses, there is no clear basis for dividing responsibility between the institution and the AI system's developer. The EU's Artificial Intelligence Act has made a preliminary clarification of the obligations of providers, importers, distributors, and users of high-risk AI, which can be regarded as an early legislative attempt in this area. In addition, copyright ownership and infringement liability, both for the underlying data used in AI training and for the data AI automatically generates, remain hot topics of discussion.