Philosopher specialising in Business Ethics - ESSCA

An open letter, signed by Elon Musk among others, called for a “pause” in the development of systems more powerful than GPT-4 (1). It also called for meaningful intervention by public authorities to prevent general artificial intelligence from causing harm to humanity, from leading to the “loss of control of our civilisation.” One of the key arguments in this open letter is that the leaders of high-tech companies are not democratically elected. This argument, which we will call below the “democratic argument,” is ambiguous because it refers to two divergent conceptions of CSR: one economic and instrumental, the other political and collaborative.

Illustration by Margaux Anquetil

To assert that the leaders of high-tech companies are not democratically elected and, as such, lack the legitimacy to make the political decisions required by technologies that can harm humanity (2) seems to violate the principle of parsimony, which calls for “expressing the facts as perfectly as possible with the least expenditure of thought” (3). Indeed, it would have been enough for the open letter to state that the political authorities must legislate and act to “take control.” Mentioning the fact that “tech leaders” are not elected was (apparently) unnecessary.

The democratic argument is unlikely to be a rhetorical device to impress the public – especially the American public – with the importance of the danger posed by general artificial intelligence. Other, more direct and “impressive” means are available, and the open letter uses some of them.

Consider the following passage, which the democratic argument concludes:

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

The idea of “delegation” is close to that of “representation,” which is used in the traditional conception of the firm, according to which its managers represent their shareholders. Indeed, as the philosopher Elizabeth Wolgast points out, drawing on Thomas Hobbes, “corporations represent individual stockholders and, as artificial persons [who speak and act in the name of others, can commit and obligate them], act in stockholders’ names and ‘personate’ them” (4).

This idea refers to a classical conception of CSR, an economic or “instrumental” conception in the sense that CSR serves the sole purpose of the company: to maximise its profits and the wealth of its shareholders.

Milton Friedman is well known for having defended such a conception, and he too used the democratic argument (5). According to Friedman, the leader of a private company who undertakes CSR actions to promote the general interest, to the detriment of his profit-maximising objective, would behave like a public agent. The money he or she spends in this way – shareholders’ money – would have the nature of a tax. However, only democratically elected public officials can decide to levy taxes:

“The whole justification for permitting the corporate executive to be selected by the stockholders is that the executive is an agent serving the interests of his principal. This justification disappears when the corporate executive imposes taxes and spends the proceeds for "social" purposes. He becomes in effect a public employee, a civil servant, even though he remains in name an employee of a private enterprise. [If such executives] are to be civil servants, then they must be elected through a political process. If they are to impose taxes and make expenditures to foster "social" objectives, then political machinery must be set up to make the assessment of taxes and to determine through a political process the objectives to be served.” (6)

Friedman’s position is that the firm is separate from society, which Wolgast expresses by noting that “managing a shareholder’s money entails fiduciary responsibilities, those of a financial shepherd, which insulate managers from their responsibilities to the larger society” (7). Wolgast sees this as the effect of a professional responsibility limited to the duties associated with the role of a company leader, while Andreas Georg Scherer and Guido Palazzo observe that it reflects “a clear separation of business and politics” (8).

By using the democratic argument, however, the open letter on AI does not invoke Friedman’s position, although one can recognise in it the same separation between the business domain (or, in our case, that of private research on AI) and the public domain (one obvious responsibility of which is to ensure the survival of humanity).

The second part of the open letter refers more clearly to a political conception of CSR. It states that

“AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; […] liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”

The idea of private entities collaborating with public authorities is defended by the proponents of political CSR, including Scherer and Palazzo. The latter authors propose a participatory and collaborative democratic model, which they describe as a “deliberative theory of democracy,” a model that they believe is suitable for globalisation. In particular, it avoids the problem of the democratic argument used in the open letter and in Friedman’s instrumental theory of CSR:

“[…] National governments are partly losing their regulatory influence over globally stretched corporations while some of those corporations, under the pressure of civil society, start to regulate themselves. In other words, those who are democratically elected (governments) to regulate, have less power to do so, while those who start to get engaged in self-regulation (private corporations) have no democratic mandate for this engagement and cannot be held accountable by a civic polity. In democratic countries political authorities are elected periodically and are subjected to parliamentary control. By contrast, corporate managers are neither elected by the public, nor are their political interventions in global public policy sufficiently controlled by democratic institutions and procedures.”

Globalisation thus leads large companies to perform functions that were traditionally the responsibility of the state (“traditionally” referring to a Westphalian world order, characterised by sovereign states and rather homogeneous national cultures). Thus, “public issues that once were covered by nation-state governance now fall under the discretion and responsibility of corporate managers.” Scherer and Palazzo theorise this development by proposing a political conception of CSR “that goes beyond mere compliance with legal standards and conformity with moral rules.”

This conception is based on cooperation between civil society organisations, private companies and states. It is a way of putting citizens’ values into practice and responding to their needs. As a result, “corporations thereby become politicized in two ways: they operate with an enlarged understanding of responsibility; and help to solve political problems in cooperation with state actors and civil society actors.”

Although Scherer and Palazzo situate their discussion in the context of globalisation, their conception of political CSR fits the proposals made in the open letter. Yet the letter makes no reference to this form of CSR or, for that matter, to CSR at all, perhaps because CSR applies to companies, whereas the entities working on general artificial intelligence are presented as “research laboratories” – laboratories which are nevertheless linked to private companies, a fact the open letter does not mention.



(1) “Pause giant AI experiments: An open letter,” The Future of Life Institute, 22 March 2023.

(2) Indeed, the open letter begins with the observation that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research.”

(3) E. Mach, Die ökonomische Natur der physikalischen Forschung, quoted in Vocabulaire Philosophique Lalande, 18th edition, Paris, PUF, 1996.

(4) E. Wolgast, Ethics of an artificial person, Stanford, Stanford Series in Philosophy, 1992.

(5) M. Friedman, “The social responsibility of business is to increase its profits,” The New York Times Magazine, 13 September 1970. See (in French) A. Anquetil, Qu’est-ce que l’éthique des affaires ? Paris, Vrin, Chemins Philosophiques, 2008.

(6) Ibid.

(7) E. Wolgast, op. cit.

(8) A. G. Scherer & G. Palazzo, “The new political role of business in a globalized world: A review of a new perspective on CSR and its implications for the firm, governance, and democracy,” Journal of Management Studies, 48(4), 2011, pp. 899-931.





To cite this article: Alain Anquetil, “Two conceptions of CSR in the Open Letter calling for a pause in AI research,” Philosophy & Business Ethics Blog, 12 May 2023.

