ALGORITHMS, INTELLECTUAL PROPERTY RIGHTS AND WHISTLEBLOWING: THE SAGA CONTINUES

“Black box”: whenever someone reads this term, s/he thinks of airplanes! But in the world of computers and algorithms it takes on a completely different meaning. As per Frank Pasquale,[1] the black box is a metaphor that evokes both the data-recording devices carried by airplanes and systems whose way of functioning is mysterious.[2] Such a system can be a supercomputer, or a collection of computers kept together, that uses algorithms to make decisions, including decisions about us.

As per Pasquale, these “black boxes” are kept closed through two broad strategies:[3] 1.) Real secrecy, which puts a physical barrier between the contents of the box and anyone not authorized to access them; and 2.) Legal secrecy, which puts an obligation, created through both legislation and contracts, upon those who do have access to keep the contents secret. No one among the general public knows exactly what kind of information gets fed into these boxes, what kind of algorithms they run, or how they arrive at a decision. Transparency is non-existent when it comes to algorithmic decision-making. Companies take advantage of the intellectual property laws in place to avoid disclosing how these algorithms function: they claim the algorithm is their intellectual property, that the information is proprietary, or invoke any other form of protection that might be available.

However, merely forcing transparency would not solve the problem; the companies that control these boxes through algorithms must also explain how their algorithms work and how reliable they are.[4] The quintessential question remains: why would they? They have the protection of the law, and there is no obligation whatsoever to disclose anything. Another important question that remains unanswered is how we would prevent a “tragedy of the commons”[5] if all these algorithms were made public.

One peculiar thing to note about this increased reliance on algorithms is that it takes away the individual’s freedom of thought. Imagine a situation where an individual approaches a bank for a loan of USD 100,000. He has a good business plan and a few collaterals to offer in exchange for the loan amount. The bank manager to whom this business plan is disclosed believes, from past experience, that such ideas succeed and that the loan can be offered. However, the manager runs the individual’s information through the bank’s credit-scoring algorithm, which finds that the applicant has a poor credit score for unknown reasons, and the loan is declined.

What happened here? More reliance was placed on the algorithmic decision than on human judgment. Whether this is correct or not is very hard to establish either way in the absence of many variables. The only thing that gets established in this scenario is that the manager’s view that the business idea is worth its salt gets thrown in the bin just because the individual’s credit score is not up to the mark. Over time the manager will stop making assessments of his own and will depend completely on the algorithm, and so will others.
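A minimal sketch, in Python and with entirely invented names and a hypothetical cut-off, of the dynamic just described: the human assessment exists in the process, but the algorithmic score is checked first and can veto it outright.

```python
# Hypothetical illustration -- not any real bank's system.

def manager_assessment(business_plan: str) -> bool:
    """The manager's own judgment: he believes the idea is sound."""
    return True

def algorithmic_credit_score(applicant_id: str) -> int:
    """Opaque score produced by the black box, low for unknown reasons."""
    return 540

def approve_loan(business_plan: str, applicant_id: str) -> bool:
    # The score is consulted first; below the (hypothetical) cut-off,
    # human judgment never enters the decision at all.
    if algorithmic_credit_score(applicant_id) < 650:
        return False
    return manager_assessment(business_plan)

print(approve_loan("good plan, collateral offered", "applicant-42"))  # False
```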

No one questions the decisions taken by these algorithms or the quality and/or relevancy of the information stored in the black box, nor is there an opportunity for fair representation, because all of this happens behind closed doors that are both really and legally protected under the current regulatory framework.

The key technology used for the aforementioned decision(s), machine learning, takes existing/past data as a starting point/input to come up with a computer model that can be used for decision-making. Existing (human) biases might creep in at different and multiple stages, such as: i.) the framing of the problem which needs to be tackled; ii.) the selection and preparation of the input data; and iii.) the interpretation of the outputs. This can happen either intentionally or unintentionally, thereby making the algorithms that will be used for decision-making biased.[6]
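To make stage ii.) concrete, here is a minimal Python sketch of a toy classifier trained on invented historical lending data in which one group was systematically declined. The feature layout, the numbers, and the use of scikit-learn are all assumptions for illustration, not a description of any real system.

```python
from sklearn.linear_model import LogisticRegression

# Invented historical data. Features: [merit_score, group]; label: 1 = approved.
# In this fictional past, group 0 was approved regardless of merit,
# while group 1 was mostly declined.
X = [
    [0.9, 0], [0.8, 0], [0.7, 0], [0.6, 0],
    [0.9, 1], [0.8, 1], [0.7, 1], [0.6, 1],
]
y = [1, 1, 1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Two applicants identical on merit, differing only in group membership:
# the model is likely to approve one and decline the other, because the
# bias in the training data has become part of the model itself.
print(model.predict([[0.8, 0], [0.8, 1]]))
```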

Another example: Amazon switched off the online system it was using for screening job applicants when it was found that the system regularly deprioritized women applicants, since the existing/past data suggested that more men were hired irrespective of the number of applications.[7]

Such situations can be even more dangerous in the Indian context, where a large number of biases exist in society and are often reflected in both private and public policies and the corresponding decisions. Considering the lack of religious, ethnic, gender, and sexual diversity in the positions that are either responsible for or directly influence the design and implementation of these models and their deployments, the biases, when applied, can prove to be dangerous.

In order to counter this opacity, arbitrariness, and unwarranted bias, Pasquale puts forth the argument that the companies which hold these black boxes must open them and make disclosures about the methods, the quantity and type of data being collected, and how the algorithms are designed.[8] According to Pasquale and other scholars like Brent Daniel Mittelstadt[9] and Paul B. de Laat,[10] disclosure of how the algorithms function is absolutely necessary, and to achieve transparency these black boxes must be opened up. Once they are opened, one will get a sense of how the internal decision-making takes place.

As per Pasquale, full transparency to the general public would be a nightmare for privacy and would lead to voyeurism and intellectual property theft. According to him, there is a need for “qualified transparency”,[11] whereby revelation about a decision is limited to a small group of individuals in order to respect all the interests involved in a given piece of information. The revelation should not be stalled for decades, which would make the exercise futile. However, this “qualified transparency” model would not work in all situations. He gives the example of data brokers[12] and argues that the black boxes owned by data brokers must be opened fully to regulators, for the reason that individuals should have the right to inspect, correct, and dispute inaccurate data, as it directly affects them.

The key question Pasquale tries to answer is how one can adjust an old set of laws to a new technology. He intends to revive something known as the “political economy”: he is trying to bring back the intricate balance between regulation, corporations, law, and policy-making. According to him, this balance has shifted completely in favour of corporations due to technology.[13] He is of the view that the political economy can be revived through regulation and the active involvement of government agencies. The secret world of the black box has become too big and unstable, and it is the duty of the State to take control of the situation and restore trust.[14]

All of Pasquale’s worries came true! Recently, Ms. Frances Haugen, a former employee of Facebook, chose to become a whistleblower against what is going on inside the company. During September 2021, US federal authorities received multiple anonymous complaints against Facebook claiming that Facebook amplifies misinformation, hate, and political unrest, and that Instagram (Facebook’s platform) is harmful to teenage girls around the world. On October 3, 2021, Ms. Haugen came before the world and revealed her identity: she appeared in an interview on CBS’[15] “60 Minutes”, where she made some of the most shocking revelations of the year 2021!

During the interview, Frances gave an insight into how the algorithms developed by Facebook work. She said, “The algorithm picks from those options based on the kind of content you’ve engaged with the most in the past.” Unfortunately, this content is mostly hateful, divisive, and polarizing; this is what Facebook’s own research shows. She further said that “it’s easier to inspire people to anger than it is to other emotions”.[16] According to Frances, Facebook can tweak its algorithms, but people would then spend less time on the site, and less time means less money for the company. Facebook makes more money when people consume more content, spend more time on the site, and click on more ads.
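What Frances describes is, at its core, ranking purely by predicted engagement. Here is a minimal Python sketch of that logic; the names and scores are hypothetical, and this is of course not Facebook’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # expected clicks/comments/shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The single objective is engagement (and hence time on site):
    # the most engaging items win, with no penalty for divisiveness.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("calm, factual explainer", 0.2),
    Post("outrage-inducing post", 0.9),  # anger reliably drives engagement
])
print([p.text for p in feed])  # ['outrage-inducing post', 'calm, factual explainer']
```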

Frances also testified before the US Congress on October 5, 2021, putting forth the same points as in the interview. The biggest takeaway from the interview and the testimony is that “over and over again there is a conflict of interest between what is good for the public and what is good for Facebook, and more often than not Facebook chooses what is good for the company”. Frances requested the lawmakers to step in and regulate Facebook, as this is urgently required, because the management of the company is not going to choose the public good over its own profit. After the testimony, the lawmakers assured Frances and the people that they will definitely come up with something to stop this menace.

The question that emerges after the Frances Haugen “episode” is: what can be done to stop these companies and their algorithms?

A few scholars have tried to tackle this question. Paul B. de Laat argues for a model based on “accountability”.[17] According to him, we should not be concerned about how the algorithms are designed, how the calculation is done, or how the decision is arrived at; rather, the corporation responsible for developing the algorithm must be held accountable when something goes wrong. His arguments are similar to those of Pasquale, as he makes the similar point that complete transparency is counter-productive and not necessary; he asserts that political pressure is necessary for holding the corporations accountable and transparent.[18]

Brent Daniel Mittelstadt, along with other scholars, argues for making the algorithms “ethical” by removing human biases.[19] They target the algorithms themselves and put forth the argument that algorithmic decision-making is inevitably here to stay, and that with the advancement of technology our reliance on such decisions will only increase; the focus must therefore shift to making the algorithms as ethical as possible.

Looking closely at the arguments put forth by Paul B. de Laat and Brent Daniel Mittelstadt, they can be seen as subsets of the argument put forth by Pasquale. If we dissect Pasquale’s argument for opening the black boxes, it eventually leads to holding the makers of the algorithms accountable, thereby nudging them to make the algorithms ethical. In the broader sense, the three arguments are related.

Sandra Wachter offers a completely different approach to obtaining an explanation for decision-making. She, along with Brent Mittelstadt and Chris Russell, argues that there is no need to open the black boxes; the explanation for a decision should instead be given through the use of “counterfactuals”.[20] According to this approach, unconditional counterfactual explanations should be given for automated decisions, since such explanations help the data subject to act, as opposed to merely understanding what is going on. This method seems more efficient and would not cause much friction with existing or upcoming disclosure laws. The approach would help corporations maintain their intellectual property and their competitive advantage and would minimize interference by law enforcement agencies, while a counterfactual explanation still provides data subjects with a meaningful explanation of a given decision, grounds to contest it, and advice on how they can change their behaviour or situation to possibly receive the desired decision (e.g. loan approval) in the future, without facing the severely limited applicability imposed by the internal decision-making process and the law.[21]
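A minimal Python sketch of the counterfactual idea, using a hypothetical credit-score rule: the data subject never sees the rule itself, only the nearest change to their situation that would have flipped the outcome.

```python
def decide(credit_score: int) -> bool:
    """The lender's hidden rule -- never disclosed to the applicant."""
    return credit_score >= 700  # hypothetical threshold

def counterfactual_explanation(credit_score: int) -> str:
    if decide(credit_score):
        return "Loan approved."
    # Search outward for the nearest score that changes the outcome.
    needed = next(s for s in range(credit_score, 901) if decide(s))
    return (f"Loan denied. Had your credit score been {needed} rather than "
            f"{credit_score}, the loan would have been approved.")

print(counterfactual_explanation(640))
```

The explanation is actionable (raise the score to 700) without revealing anything else about how the model works.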

For Pasquale, it is important for the data subject to understand how the algorithm functions and which data sets are used to arrive at the decision, so that he/she has the chance to counter the decision. For Sandra Wachter, it does not matter if the data subject does not understand how the algorithm functions; the only important thing is that he/she is given an explanation through counterfactuals so that he/she may counter the decision.

The situation of algorithmic decision-making is unprecedented. There are decisions to be made by the lawmakers, but one thing is certain: the lawmakers have to step in to make the situation equitable between the general public and the tech giants. As of now, the power balance is skewed heavily in favour of tech giants like Facebook, because no one knows exactly how they function.

The algorithmic society is a completely different animal! And it is still considered to be in its nascent stages, despite the significant influence it already has. Proper information about algorithms and their functioning should be the first step towards their optimum use for the betterment of human society as a whole. In the absence of adequate information, regulation becomes very difficult, be it self-regulation or external regulation. Regulation by the State is the need of the hour when it comes to algorithms, and the same was unequivocally highlighted by “Ms. Frances the whistleblower”.


[1] Frank Pasquale is a professor of Law at Brooklyn Law School.

[2] Alan Rubel, review of The Black Box Society: The Secret Algorithms That Control Money and Information, by Frank Pasquale, Cambridge: Harvard University Press, 2015, 320 pp.

[3] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press, 2015, p. 6.

[4] Ibid., p. 8.

[5] An economic theory according to which, if everyone uses a shared resource as per his whims and fancies, the resource will ultimately be depleted.

[6] Rakesh Kumar, India needs to bring an algorithm transparency bill to combat bias, available at https://www.orfonline.org/expert-speak/india-needs-to-bring-an-algorithm-transparency-bill-to-combat-bias-55253/#:~:text=Algorithms%20and%20data%20must%20be,to%20detect%20and%20alleviate%20bias.

[7] Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[8] Supra note 3, at p. 142.

[9] Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo et al., The Ethics of Algorithms: Mapping the Debate.

[10] Paul B. de Laat, Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?

[11] Supra note 3.

[12] Id. at p. 145.

[13] Frank Pasquale, interview with Maryland Carey Law, Faculty Publication, available at https://youtu.be/L11-erUqUKA

[14] Supra note 3, at p. 212.

[15] CBS is a US-based television network.

[16] Facebook whistleblower Frances Haugen details company’s misleading efforts on 60 Minutes, CBS News, available at https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/

[17] Supra note 10, at p. 3.

[18] Id. at p. 16.

[19] Supra note 9, at p. 2.

[20] Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, Harvard Journal of Law & Technology, Volume 31, Number 2, Spring 2018.

[21] Id. at p. 4.

Mridul Gupta

Author

Mridul Gupta is a young law practitioner with experience in privacy and cyber laws, intellectual property, and energy laws. Mridul regularly assists clients, including fintech companies, in navigating data protection and data privacy laws. He operates across the full range of issues covered by the practice, taking the lead role on key mandates. An alumnus of the School of Law, UPES, his interests run beyond law into the policy sphere and social entrepreneurship. Mridul is a prolific writer whose articles have been published in national and international publications and journals.
