Written by Rajlakshmi Chakravarti
Introduction
On December 11, 2024, Justice KV Viswanathan stated, “Artificial Intelligence (“AI”) need not be outrightly rejected, although a final call on how far it should be used needs some consideration”. The securities market has readily welcomed AI and Machine Learning (“ML”) tools for their many benefits, such as increased efficiency and more accurate outputs. At the same time, this enthusiasm is concerning, because these tools also carry risks, such as the volatility of their outputs. Both the benefits and the risks are discussed below. To shield the market and its stakeholders from the misuse and risks of these tools, the Securities and Exchange Board of India (“SEBI”) released a consultation paper on November 13, 2024, on ‘Proposed amendments with respect to assigning responsibility for the use of Artificial Intelligence Tools by Market Infrastructure Institutions, Registered Intermediaries and other persons regulated by SEBI’ (“Draft Amendments”). As Vanessa Agarwal, a SEBI lawyer, states, “The future of AI is uncertain, but the need for thoughtful regulation is undeniable”. Yet even a much-needed regulatory framework can introduce fresh problems if adopted uncritically.
Analysis
(a) Mechanization: The Pros and the Cons
The use of AI and ML tools is significantly beneficial in the securities market, where they assist in analysing market data and executing trades at lightning-fast speed and with greater accuracy. Relying on this output, regulated entities (“REs”) assist investors in making decisions. Stakeholders, too, form their decisions, select stocks, trade in the market and prepare their investment strategies on the basis of this output. These tools also increase efficiency in operational and compliance functions, which in turn makes the market more efficient and less risky.
But SEBI, in the aforementioned consultation paper, has also recognised that increased use of these tools brings increased risks for stakeholders. For example, the output of these tools depends on the user inputs and the data sets provided, and the slightest change in the input can change the output drastically; the outputs are therefore volatile in nature. Furthermore, “black box” systems, a more advanced form of AI, are not transparent: they provide no clear explanation of how they reach their decisions. Such a lack of transparency, in turn, leads to a lack of accountability.
(b) SEBI’s Proposed Amendments: A Much-Needed Change or in Need of Change?
As a result of the aforementioned downsides of such tools, SEBI has proposed amendments to (i) the Securities and Exchange Board of India (Intermediaries) Regulations, 2008, (ii) the Securities Contracts (Regulation) (Stock Exchanges and Clearing Corporations) Regulations, 2018 and (iii) the Securities and Exchange Board of India (Depositories and Participants) Regulations, 2018. Through these amendments, SEBI has proposed to assign sole responsibility to Market Infrastructure Institutions, Registered Intermediaries and other persons regulated by SEBI who use such AI and ML tools, irrespective of the scale and scenario of adoption of such tools for conducting its business and servicing its investors, (a) for the privacy, security and integrity of investors’ and stakeholders’ data, including data maintained by it in a fiduciary capacity, throughout the processes involved; (b) if the output arising from the usage of such tools and techniques is relied upon or dealt with; and (c) for compliance with applicable laws in force. These proposed amendments thus aim to establish a strict regulatory framework in the securities market for the welfare of the market and its stakeholders. The framework will also provide oversight over the use of such tools and is expected to keep the attached risks at bay.
Now, SEBI’s Draft Amendments predominantly focus on establishing comprehensive accountability for REs so that investor interests and market integrity are not compromised. However, several aspects of these amendments reveal an asymmetry in the regulatory framework: alongside their benefits, they are also flawed. For example, the phrase “irrespective of the scale and scenario of adoption of such tools for conducting its business and servicing its investors” ignores that scale and scenario do matter in making a case more or less risky. It indicates that the framework takes a one-size-fits-all approach rather than a risk-based one. It therefore lacks proportionality, as it would lead to excessive regulation of even low-risk cases. Because this imbalance exposes REs to being treated as violators and penalized in any and every kind of case, it would consequently discourage the adoption of these tools in the securities market. The focus on assigning “sole responsibility” also suggests that SEBI has overlooked the substantial gap between responsibility and accountability, as well as the fact that AI is a value chain. REs should face sole accountability for the outcome but share responsibility along the chain: where a particular link of the value chain goes wrong, only the party involved at that link should be held responsible. This is because third parties, such as developers, can be involved at different stages of this value chain.
For the same reason, “throughout the process involved” is problematic. AI, as a value chain, comprises several processes, and assigning sole responsibility to the intermediaries for the whole chain is unfair. Another flaw lies in the aforementioned phrase, “if the output arising from the usage of such tools and techniques is relied upon or dealt with”. First, it needs clarification, because it does not specify whether “output” here refers to the output relied on or dealt with by the intermediary itself or by the stakeholders. Second, if it means the latter, it is not justifiable: where the technology provides the correct output, any further decisions by the stakeholders, which solely involve the workings of their own minds, must be the responsibility not of the intermediaries but of the stakeholders themselves. Furthermore, SEBI has defined AI as being inclusive of any application or software program or executable system or a combination thereof, offered by the person regulated by the Board to investors/stakeholders, or used internally by it to facilitate investing and trading, or to disseminate investment strategies and advice, or to carry out its activities including compliance requirements, where the same is portrayed as part of the public product offering or is under usage for compliance, management or other business purposes. But this definition is excessively broad in one sense and too narrow in another. It is too broad because it is too inclusive. Although it can be argued that it has been kept intentionally broad to accommodate future technological developments while maintaining regulatory effectiveness, such excessive inclusivity can make REs overly cautious in using these tools, as they would not want to be scrutinized under so stringent a regulatory framework. This could, in turn, slow the adoption of such tools in the securities market.
Thus, to avoid this, the definition of the Organisation for Economic Co-operation and Development (“OECD”) could be used instead, which is more proportionate and states, “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.” As for SEBI’s definition being narrow, any definition of AI can prove restrictive, especially when such tools evolve with the modern world at a fast pace and a new AI tool is introduced at every turn. New tools could thus escape the regulatory framework through this loophole rather than being scrutinized under it. Therefore, owing to the aforementioned flaws, these amendments, although meant to bring welfare and security to the mechanized securities market and its stakeholders, are themselves in need of change, something that must not be overlooked by SEBI or by anyone else who will be impacted by them.
Conclusion
While the introduction of AI and ML tools in the securities market, and the market’s consequent mechanization, has its benefits, it also carries risks. SEBI, while keeping the benefits in mind, has not ignored the risks: in the aforementioned consultation paper, it has proposed amendments that would put in place a stringent regulatory framework, provide oversight over the use of such tools, and prevent risks. While this shows that SEBI has prioritized the welfare of the securities market and its stakeholders, the amendments are not themselves free of flaws; as discussed above, they instead indicate an asymmetry in the regulatory framework. Thus, although regulation of the mechanization of the securities market should be welcomed in the form of these amendments, it should not be welcomed blindly. Instead, the flaws causing the asymmetry must be corrected before implementation, so as to make the amendments proportionate, justified and fair to everyone. This would preserve the benefits of such tools in the securities market while keeping a check on their risks.