
Recommender Systems as Techniques of the Self?

Author: Tyler Reigeluth (Centre de théorie politique, Université Libre de Bruxelles)


Abstract

This paper aims to give a renewed perspective on the normative stakes involved in the algorithmic recommendation of cultural content. Two prevalent framings of technological normativity and transparency need to be overcome. First, algorithmic design seems convinced that accessing the behavioral level of interaction coincides with a greater level of truth and authenticity, as if the subject were incapable of speaking honestly of itself. Conversely, critics of 'black-box' normativity imagine that by accessing the code, the written structure of the algorithm, we will unveil something of its essence. By reading Foucault's notion of techniques of the self, as set out in L'Herméneutique du sujet, together with the cybernetic theory of feedback and Simondon's philosophy of individuation, the author claims that users need neither to see through the algorithm nor to see its actual workings, but that they need to be able to see themselves when using the algorithm.

Keywords: algorithmic normativity, collaborative filtering, recommender systems, techniques of the self, transparency, Foucault, Simondon

How to Cite: Reigeluth, Tyler. "Recommender Systems as Techniques of the Self?" Le foucaldien 3, no. 1 (2017): 1–25. DOI: https://doi.org/10.16995/lefou.29 [Note: In 2022, Le foucaldien relaunched as Genealogy+Critique.]


Published on 2017-09-01 (peer reviewed)

1. In Search of Algorithmic Normativity

There have always been too many books to read, too many paintings to admire, too many movies to watch. And there has never been enough time to read all of the books we would like to or to listen to all of the music we want to. Collective practices of transmission embedded in educational and media institutions have, to a certain extent, guaranteed access to quantities of cultural content unattainable at a purely individual level. Certain books are read because they are part of the school curriculum, others because we trust the counsel of our librarian or because we reproduce our parents' cultural capital. Certain movies are seen because they are our best friend's favorites, others simply because they are considered "classics". Such institutions and relationships function simultaneously as vectors of propagation for practices and as functions that map the vector to a particular situation.1 All of our cultural practices are to some degree operations of mimesis and selection. The selection need not be conscious: it is already woven into our social relations; and the mimesis need not be intentional: it is the very condition for the social transmission of behaviors. Selection and mimesis hold each other in mutual suspicion, but also seek each other out: we imitate what we select and we select what we imitate. Thus, they are by no means mutually exclusive, but rather mutually constitutive. On the far ends of their dynamic polarization, we find rare but powerful acts of choice (on the selection side) and mimicry (on the mimesis side), neither of which, as we will see, should be considered proof of a sovereign subject, but rather the moments in which subjectivity emerges as a problem to be resolved within a collective field of action. Choosing and imitating are the reciprocal experiences of the individual's differentiation from, and alignment with, the collective forces structuring its activity.

Today, it appears we must add "recommender systems" to the long list of socio-technical mechanisms of selection and mimesis. These algorithmic processes work as "filters" or "selectors" which parse and organize our environment and, as such, they can be counted as normative apparatuses. Indeed, it is commonplace to point out the normative function algorithms play in shaping our attention, in forming habits and in "nudging" certain actions. This normative function is typically presented as a "black-box" that needs to be unveiled, rendered transparent and submitted to forms of accountability.2 While this claim is certainly relevant, it seems to take aim at the algorithm while simultaneously confusing it with the larger, albeit related, process of data collection and visualization. Likewise, it fails to differentiate or recognize the specificity of contemporary machine-learning algorithms and practices with regard to more classic algorithmic models. As such, this critical approach to algorithmic normativity falls short of qualifying and locating its exact normative efficacy.3 The agency of the machine-learning algorithms that recommender systems use to select and imitate cultural content on the Web is closer to an active membrane which adjusts its triggering thresholds than to a passive sieve composed of predetermined norms. More and more of what we watch, read or listen to is at least partly directed by some form of automatic recommendation based either on our own past behaviors or on the preferences and behaviors of others similar to our own. For this type of recommendation to occur there must be some kind of persona or "profile" that mimics our behaviors and selects relevant contents. This profile, however, is not the individual, nor is it a pure statistical aggregate. It is both before and beyond the individual. "Before" in that it tracks those behaviors which escape the conscious purview of the individual (what Deleuze would call the "dividual"4); and "beyond" in that it results from the correlation of traces that are not our "own". In other words, there is something in the "profile" which seems, against all odds, to answer the call of "subjectivity". The algorithmic recommendation processes focused on here are primarily concerned with online retailing and cultural content, where the customer is expected, in some form or another, to partake in a process used to inform its preferences and desires. As such, recommender systems used in security apparatuses or insurance practices are not addressed.

There is an overabundance of products, services and contents available on the Web among which the individual consumer has neither the time nor the expertise to choose. How does one choose when confronted with so many choices? If they want loyal and satisfied customers, retailers know they have to offer them a meaningful selection of products. Sheer quantity may be enough to excite customer purchasing, but it is certainly not enough to maintain interest over the long term. Retailers must not only offer more choices in the services and products they provide, but these choices must be expertly targeted at consumer preferences. The recommendations themselves amount to a service in their own right and establish a circular relationship: they are informed by the very behaviors they seek to induce. It is the normative dimension of this circularity that we will try to understand; in other words, how this feedback mechanism produces certain types of norms.

While we could rather cynically consider recommender systems as normalization processes seeking to align customer behaviors with the profit-making interests of the retailers they serve, such an account would be unable to grasp how retailer and customer interests are mediated and played out through the recommendation process' algorithmic agency. I am by no means claiming that recommender systems, in their prevailing state, do not, to put it crudely, by and large serve the capitalist predation of surplus value or the capture of consumer libido and desire. Rather, my contention is that the paradoxical nature of the algorithmic techniques deployed – and the inability to understand their normativity as that of a tool serving a predetermined interest or function – should allow us to entertain the idea of their potential to become techniques of the self. What we need, then, in approaching the interplay between automated recommendation and subjectivation is a distributed conception of algorithmic normativity, one which leaves room for the divergence and heterogeneity of agencies while clearly recognizing the power relations and asymmetries involved between them.

To this end, I will first give a cursory yet essential presentation of the algorithmic techniques used in recommender systems, namely collaborative filtering. I will then engage in what should be understood as a speculative experiment. By using the metaphor of "steering" – present both in Foucault's exposition of his notion of techniques of the self and in early cybernetic endeavors to characterize feedback mechanisms – as a productive device, we will attempt to articulate Foucault and cybernetics in order to better approach the circularity involved in recommender systems. On one hand, Foucault will give us some reference points for assessing the ethical relationship to oneself produced by techniques of the self; on the other, cybernetics will allow us to consider circularity from a technical perspective which still bears much weight in the contemporary algorithmic design of feedback mechanisms. Finally, I would like to point to a possible synthesis of this articulation in light of certain aspects of Gilbert Simondon's philosophy of technique and individuation. My hope is that this speculative foray will be seized as a provocation to reconsider what we expect of algorithmic design, and not as a programmatic proposal.

2. Recommender Systems and Collaborative Filtering

Without going into too much detail, a few fundamental technical notions and distinctions should, nonetheless, be introduced so that our speculative undertaking has some bearings. Online retailers rely on filtering techniques to supply the most informed and tailored choices to their customers. The ideal recommendation should guide customers towards items they have not yet purchased or encountered but that they would probably enjoy. The filters retailers use to sort and organize items into meaningful recommendations cannot work solely on the basis of a single customer's past behaviors. They must find a delicate balance between sameness (always the same products) and difference (only different products): the customer must be able to recognize both the novelty of the recommendation and its connection to a larger field of already known products. To do so, "collaborative filtering" techniques are used to analyze "relationships between users and interdependencies among products and to identify new user-item associations."5 Collaborative filtering is usually contrasted with "content filtering" or "feature-based filtering", which constitutes individual profiles for different users and products describing their respective characteristics. These characterizations usually rely on explicit and standardized criteria that are manually completed by both the user and the expert. "Filtering" in this case means matching a product profile with a user profile, and differs little from the dying practice of the librarian or the video rental store offering books or videos that match a specific customer's preferences. In a way, content filtering could be seen as an impersonal and automated interaction with a given expert to whom as much information as possible must be given in order for it to make the recommendation that best matches what one is looking for.
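
The logic of content filtering can be reduced to a few lines: explicitly declared features on both sides are simply matched. The following is a minimal sketch, in which the catalog, the feature labels and the matching rule are all illustrative assumptions rather than any particular retailer's method.

```python
def content_filter(user_profile, catalog, top_k=3):
    """Rank items by the overlap between declared user features and item features."""
    def match(item):
        # Count how many of the user's declared preferences the item satisfies.
        return len(user_profile & catalog[item])
    return sorted(catalog, key=match, reverse=True)[:top_k]

catalog = {
    "book_a": {"noir", "short"},
    "book_b": {"noir", "historical"},
    "book_c": {"romance"},
}
# A user who has explicitly declared a taste for noir and historical fiction:
print(content_filter({"noir", "historical"}, catalog))  # ['book_b', 'book_a', 'book_c']
```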

Conversely, collaborative filtering techniques proceed by creating profiles that correspond not to an individual user or product, but to their mutual association in relation to other user-product associations. The profile is the result of a constant "learning" process and should be seen not as an actually existing relationship, but as the projection of a relationship that has yet to be realized. The recommendation, then, is partly informed by existing relationships and behaviors, and projects this relationship, in the form of a prediction, through the behaviors it seeks to induce. From the machine-learning point of view that informs these collaborative filtering techniques, the profile is not the model the algorithm executes but the constantly updated and corrected output of the algorithmic process.

These filtering techniques are one of the building blocks of what are commonly called "recommender systems": automated systems that produce recommendations based on predictive algorithmic models. As Chopra and Balakrishnan write: "CF [collaborative filtering] models form the core of most recommender systems. They work by extrapolating unobserved user-item preferences from preference information collected from the target user, and the preferences of all the other users. Finally, recommendations are made, and the user can be shown the items estimated to be the most preferred by her."6 Similarly, Shroff underlines that, "In collaborative filtering, there is no distinction between 'objects' and 'features', as was required in the case of machine learning using classifiers. Books are objects with the people who buy them as features. Similarly for films or ratings. The features that emerge out of collaborative filtering are hidden, or 'latent', such as the roles people play."7 These filtering techniques thus have much more in common with spontaneous forms of socialization, in which behaviors derive meaning from the action in which they are involved, than they do with situations in which actors come to explicit intersubjective agreements. This is one way of understanding the mix of "implicit" and "explicit feedback" that is specific to collaborative filtering.
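
To fix ideas, here is a minimal sketch of the matrix factorization approach to collaborative filtering described by Koren, Bell and Volinsky (note 5). The toy data, the number of latent factors and the learning parameters are all illustrative assumptions; the point is simply that the latent user and item "profiles" are not given in advance but are the continually corrected output of a learning loop run over observed behaviors, from which unobserved preferences are then extrapolated.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items;
# 0 marks an unobserved user-item pair to be predicted.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                          # number of latent factors
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # latent user profiles
Q = rng.normal(scale=0.1, size=(n_items, k))   # latent item profiles

lr, reg = 0.01, 0.05                           # learning rate, regularization
for epoch in range(2000):
    for u, i in zip(*R.nonzero()):             # iterate over observed pairs only
        err = R[u, i] - P[u] @ Q[i]            # prediction error on this pair
        p_u = P[u].copy()
        # The "profile" is nothing but this constantly corrected output:
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * p_u - reg * Q[i])

# Extrapolate an unobserved preference, e.g. user 1 and item 2:
print(round(float(P[1] @ Q[2]), 2))
```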

One of the most interesting, albeit problematic, aspects of collaborative filtering techniques is the way they couple explicit and implicit feedback to personalize recommendations. "Personalization" requires customer-specific information that is typically provided by explicit information and feedback mechanisms (customer satisfaction surveys; the user creating a profile; the user rating, commenting or giving feedback; etc.). This is a very limited and limiting way of obtaining feedback: users do not like being harried into giving their opinion, and discrete or ordinal evaluation and rating systems do not necessarily translate the user's subjective experience so much as the experience it has of the evaluation process. Of course, there is always a chance that the user could be "gaming" the algorithm if it engages too directly with the feedback process. The generally uncontested principle underlying the design of these types of services is that the user-customer should not be aware of the algorithm's presence because that would distort or corrupt the former's behavior.8 The recommendation produced by the recommender system would be impure or misguided if it were informed by user-customer behaviors in which the function of the algorithmic process were more or less consciously integrated. As Dominique Cardon pointed out in his article "Inside the Mind of PageRank", this antagonism between organic and strategic behaviors as a guiding belief of algorithmic design also lies, suggestively, at the heart of Google's conception of its PageRank algorithm:

While the science of algorithms must pursue its quest for perfection to best reflect Internet users' actions, this must never involve Internet users acting according to Google, nor Google engineers interfering with the rankings. Google wishes to see this world as natural. In parallel, another world is open to advertisers wishing to fight over advertising auctions' keywords. This world is openly fully strategic and instrumental. […] The partitioning of the results page into two worlds, organic and strategic, conveys a vision of the web and Internet users which Google has imposed on the entire ecosystem of the web, through all possible means.9

Algorithmic design is firmly attached to an ideal of transparency, but in the opposite sense to the one advocated by critics of "black-box" normativity. The algorithmic process of recommendation is transparent if the user-customer is able to see right through it when carrying out a given activity. The activity itself should be the only end the user has in mind; any experience of the algorithmic process involved would be disruptive to the user's goal-oriented behaviors. These very behaviors are the sources of information that retailers and advertisers require to make the "best" recommendations. Ensuring a certain transparency of algorithmic design is not only a way of disinhibiting the user-customer's behavior but also a way of guaranteeing the transparency of the market's supply side. It is not surprising, then, that algorithmic processes and online retailing are seen as sources of "disintermediation" driven by high levels of market transparency, in which buyers and sellers seek to circumvent the middlemen so as to meet at the "fair price".10 So it would seem that transparent algorithmic processes allow user-customers to behave "naturally" and retailers to behave strategically.

While this trope is not always made explicit, it is undoubtedly an analytical schema that corresponds to the algorithmic design ethos of this type of service. In his book Honest Signals, Sandy Pentland, director of the MIT Human Dynamics Lab, presents the results of behavioral research done with "sociometers" – small wearable devices used to measure group activity and performance – and gives an account of "honesty" which would no doubt startle those who spontaneously ascribe such a virtue to the figure of a conscious and responsible subject. He writes:

What are the types of honest signals that humans use? We are familiar with many types of human signals; smiles, frowns, fast cars, and fancy clothes are all signals of who we are (or who we want to be). In fact, this sort of signaling is probably the basis for fashion and 'current culture.' We are conscious of these types of displays and often carefully plan to incorporate them into our communications. And therein lies the problem: because these signals are so frequently planned, we cannot rely on them being honest signals. We need to look for signals that are processed unconsciously, or that are otherwise uncontrollable, before we can count them as honest.11

By invoking the "honesty" of behaviors, the thorny and age-old question of knowing whether what someone says or does is what they really meant to do or say is sidestepped. In this world, authenticity of self-expression is not the prerogative of a sovereign subject but rather a function of behavioral mimesis and selection. It would seem that there is much more to be learned about customers' preferences and choices by tracking their implicit feedback "signals", which are the "natural" by-products of their activities. The apparent paradox is that while recommendations target a "person" – and everything about the interface design of retailing services is meant to "personalize" the interaction – they do so by tracking subconscious or unconscious behaviors. A corollary to this conception is the assumption that similar individuals like similar things for similar reasons and that the similarity observed in the past will reoccur in the future.12 The added value of collaborative filtering supposedly rests in its ability to effectively approach relevant relationships between users and products, based not on their declared preferences or their stated intentions, but on their past behaviors, insofar as these behaviors have shared aspects with other behaviors. My contention here is not so much that this behavioral and technically-mediated framing of subjectivity is necessarily flawed or misguided as such, but that it lends a false sense of added objectivity to the observed and invoked relationships that are parsed to make relevant recommendations. The prevailing idea is that a technical mediation contributes best to objectivity when its presence is undetectable, when it is subtracted from a behavior's equation while simultaneously making the behavior detectable, measurable and observable. In other words, the technical apparatus is seen as objective precisely because it is not seen as taking part in the process of subjectivation. The question with which we are now faced is: What are these behaviors that are only accessible through technical mediations? If they are neither objective nor subjective, how then might we understand their significance?13

The "honest signal" framing of behavior is strikingly reminiscent of Cold War era research into "group dynamics" by social psychologists such as Robert Bales. In their account of Bale's use of "Interaction Recorders" in his "Special Room" at Harvard University, Erickson, Klein, Daston, et al., underline the shift within the experimental method implied by Bale's approach: "If in prewar studies experimenters had attempted to make their presence less and less obvious, so as to get closer to 'real' conditions, at Bale's laboratory the experimenter was more and more present, for the room was itself a 'real' condition, somewhere between the artificial and ordinary."14 The idea behind the "Special Room" was to place small groups in a room in such way they could interact and behave as "freely" as possible in solving a given problem. Interpersonal group dynamics were observed by scorers who used Interaction Recorders to categorize units of behavior in real-time. "It was Bale's hope that the irrational (i.e., the implicit of unconscious parts of social interaction) could be made rational (i.e. explicit) through this feedback method".15 Behavior was seen as situation-based and locally regulated by feedback processes which maintained variations and anomalies within acceptable margins. The question was no longer abstract "social behaviors" or "norms" based on statistical analysis of large numbers and populations, but of intensively mining localized and experimentally controlled behaviors, out of which norms could then be extrapolated to other situations. In the end, the rationality and objectivity of such processes were not so much about the scientist's tireless reflexivity and trained judgement, as they were about entrusting methodological control to a technical apparatus comprised of rules; "the rules in question are algorithmic, in the sense that they can be executed without discretion or judgment, by 'a clerk or a machine'."16 The last thing expected of rationality was for it to be mindful of itself. Instead, rationality was seen as conforming to observable rules, understood both as the rules followed within a given situation or activity and as the rules witnessed and monitored by an experimenter. In this sense, rationality is essentially situational or, to use Herbert Simon's term, "bounded"17 by the necessities of situations in which actors are required to develop strategies, solve specific problems and make decisions.

The behaviors tracked and correlated by wearable devices like sociometers push this logic of embedding the experimental setup into real-life social interactions to such a degree that the artificiality of the embedding is ever less perceivable or conceivable. Just as certain depths of the earth would be unattainable without state-of-the-art drills, or certain stars would be unobservable without latest-generation telescopes, so too would expanses of social life remain out of reach were it not for the algorithmic models capable of mining and extracting unprecedented correlations and behavioral patterns. This new "social physics"18 claims to present a resolution and depth of social life heretofore unattainable. Nevertheless, even though the device is designed to be discreet to the point of being gradually and indiscernibly integrated into the interaction, it is still present as an addition to the environment that can always be subtracted. In some way or another, the apparatus, although it changes the nature of the interactions unfolding, can still be seen as separable from the interaction. This "presence" of the apparatus, as tenuous as it might be, reaches a new vanishing point with the algorithmic design of recommender systems. In this instance the experimental setup is unbound, no longer limited to a "room" or even a "setup" in the conventional and artificial sense of the term. Instead, it takes the form of an ambient and imperceptible – thus ideally insignificant – activity tracking. The recommender system's presence should not differ from the activity it is tracking. This assumes that the algorithmic system needs to be considered as an integral part of the activity of searching for, finding and consuming cultural content on online applications. If we do not limit the behavior to something that is exterior to the device used to track it, the collaborative filtering algorithm itself needs to be seen as behaving mimetically and selectively within a given activity.19

3. Recommender Systems as Techniques of the Self

Could it be that collaborative filtering techniques might bring about a form of subjective experience irreducible to the individual, the paradoxical experience of one's self being collectively inhabited? This is the question we need to have in mind when considering the subjectivation potential recommender systems could offer. The presence of the recommender system is precisely what a Google research team emphasizes in their paper "The YouTube Video Recommendation System": "The overall design of the recommendation system is guided by the goals and challenges outlined above: We want recommendations to be reasonably recent and fresh, as well as diverse and relevant to the user's recent actions. In addition, it's important that users understand why a video was recommended to them."20 Again, the recommendation's quality is presented as a balancing act between novelty and sameness, and more importantly as giving the user the means to recognize the difference between the two. The recommendation acts both as a prediction extrapolated from the user-consumer's behaviors and as a feedback signal retroacting on these very behaviors. Tangentially, this is also what is meant when marketing jargon speaks of "retargeting",21 "tailored advertising" or "reactive marketing".22
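
The balancing act between novelty and sameness can be pictured as a re-ranking step applied to a list of candidate items. The following sketch is a toy illustration of that trade-off, not YouTube's published method; the scoring rule, its weight and all names are my own assumptions.

```python
def rerank(candidates, seen, relevance, similarity, novelty_weight=0.3):
    """Order candidate items by predicted relevance, discounted by sameness.

    candidates: iterable of item ids not yet consumed
    seen: item ids the user has already consumed
    relevance(item) -> float: predicted preference score
    similarity(a, b) -> float in [0, 1]
    """
    def score(item):
        # Sameness: how close the item is to the most similar already-seen item.
        sameness = max((similarity(item, s) for s in seen), default=0.0)
        return (1 - novelty_weight) * relevance(item) - novelty_weight * sameness

    return sorted(candidates, key=score, reverse=True)

# Toy usage: v4 is less relevant than v3 but more novel, so it ranks first.
seen = {"v1", "v2"}
rel = {"v3": 0.9, "v4": 0.7}.get
sim = lambda a, b: 0.8 if (a, b) == ("v3", "v1") else 0.1
print(rerank(["v3", "v4"], seen, rel, sim))  # ['v4', 'v3']
```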

In order to work through this strange interaction, in which the user-consumer's profile operates as a prediction of a future relationship informed by a feedback mechanism, we need to articulate three theoretical perspectives: Foucault's take on subjectivation; the cybernetic conception of feedback; and Simondon's philosophy of transduction.

In his 1981/82 class at the Collège de France, L'Herméneutique du sujet, Foucault offers a striking account of governmentality as a field of relations which both produces and presupposes a subjective experience of the self's relationship to itself:

[…] si on prend la question du pouvoir, du pouvoir politique, en la replaçant dans la question plus générale de la gouvernementalité – gouvernementalité entendue comme un champ stratégique de relations de pouvoir, au sens plus large du terme et pas simplement politique –, donc, si on entend par gouvernementalité un champ stratégique de relations de pouvoir, dans ce qu'elles ont de mobile, de transformable, de réversible, je crois que la réflexion sur cette notion de gouvernementalité ne peut pas ne pas passer, théoriquement et pratiquement, par l'élément d'un sujet qui serait défini par le rapport de soi à soi.23

Governmentality implies a particular kind of subject and a particular kind of field of power wherein it is the subject's relationship to itself that acts both as power's productive force and as its point of resistance. Foucault sets out, in this particular lesson, to show how the "care of the self" is progressively detached from particular pedagogical or political interpretations of Greek antiquity to evolve, at the end of the pagan era and on the threshold of the early Christian period, into a general moral imperative coextensive with an ethical life. One's life must be turned towards oneself. One must come back to oneself. This implies a genuine movement, a constant effort on the part of the subject to move towards itself. In this light, subjectivity is the experience of displacement; paradoxically, it is the feeling of not being completely one's self. "Déplacement et retour – déplacement du sujet vers lui-même et retour de soi sur soi –, ce sont deux éléments qu'il faut essayer de débrouiller."24 By focusing here on a certain historical moment that he locates in late antiquity, Foucault wants to emphasize a dynamic circularity which marks subjectivity as a relationship of caring for, and thus knowing, one's self. In fact, the "self" seems to be the very name he gives this circularity. As Muriel Combes points out in her remarkable cross-reading of Foucault and Simondon, La vie inséparée: "Le concept de soi, plus encore que celui de sujet semble-t-il est toujours indissociable d'un rapport." She underlines, "Ce qui ressort des analyses de Foucault concernant les techniques de soi c'est en somme que, avant de nommer le sujet lui-même, 'soi' est le nom d'une potentialité relationelle."25

To illustrate this movement of the self towards itself, Foucault highlights the importance and recurrence of a particular metaphor within Stoic writing: navigation. Several elements come into play in the use of this metaphor. The obvious prerequisite for navigation is that there is a movement from one point to another. This movement requires a direction, a teleology: one does not navigate at random. The destination is usually a port of call, a place of mooring. The journey towards the destination, however, is full of risks and dangers threatening to throw one off course. To overcome these dangers and all the unexpected disturbances of life, one needs to possess knowledge, a certain technique or art of steering. To keep the course, one must know how to map the stars, handle the rudder, gauge the wind, command the sailors. In life too, one must know how to resist distraction, remember important things, face death, etc., all activities which require certain techniques through which the subject knows how to return to itself. But what is the subject's charted course? What is its particular objective? To become itself. How does one become oneself? By using techniques which give the self an objective reality upon which it can intervene. Techniques of the self are not so much ways of exteriorizing or expressing oneself, but rather ways of constituting the unity of the self through different exercises of attention and concentration.

One of the techniques Foucault uses to exemplify this is hypomnema: the material inscriptions one must have ready at hand in case of need. In the Greco-Roman period, this equipment was largely composed of reading notes, thoughts, public records or letters that might serve in a given situation, when faced with a certain difficult circumstance or problem. In such instances, the subject must go outside of itself and return with the appropriate answer or piece of knowledge. At first sight, hypomnemata might just look like ways of unloading certain aspects of our memory onto material surfaces that will preserve the traces of our passage and to which we can return in concrete situations. Underlining when we read, writing notes or keeping a journal, are just some of the most obvious ways in which we count on material techniques to remember for us what we think we will someday need. Hypomnemata are ways of giving ourselves the means to remember things in the future, and to return to ourselves from a different position, in a different state. Who has not happened upon a scrap of paper with cryptic notes, margin scribbles in a book or saved items on Amazon.com, with a feeling of surprise, relief or puzzlement?

More than being a way of stockpiling fragments for the future, hypomnemata are above all means of sharing common experiences and knowledge. The practice recognizes others as often being better equipped to help us than we are ourselves. It does not suffice to simply rely on matter to preserve what is fleeting for our mind. This would be nothing more than a form of the dispersion of the self that the Stoics called stultitia. The stulta is the one who lives in a state of agitation and carelessness of the self. Totally open to the outside world, she is constantly thrown off course by external influences and is unable to chart her own way. The stultus is ungovernable precisely because he cannot govern. His will sways and changes with the contingency of events – according to Foucault this is the very meaning of the relationship in Plato's Alcibiade between knowledge of self and care of self as requisite for governing others. The subject alone is not strong enough to counter the pull of stultitia; it must turn on itself. And to do so, it requires another: "Autrui, l'autre, est indispensable dans la pratique de soi, pour que la forme que définit cette pratique atteigne effectivement, et se remplisse effectivement de son objet, c'est-à-dire le soi."26 Hypomnemata, then, are not merely means of remedying the irresistible effects of forgetfulness; more importantly, they are ways of exercising one's self, of practicing a care for oneself so as to be able to care for others. In writing, (re)reading and copying, the subject experiences its differentiation through incorporation and repetition.

This example of hypomnema as a technique of the self is not meant to be applied as such to the study of recommender systems. Foucault's "tool-box" should be seen as composed of styles and methods, not of ready-made all-purpose tools. What the example emphasizes is that techniques do not stand against or come after subjectivity, but rather that the technical mediations of an act are what allow an experience of the self to take form. Through techniques, the same act can be carried out in very different ways. Techniques are inherently ethical in that they elevate the means to the same rank as the ends. "Techniques of the self" actually implies that the self is a means for a technique and techniques are a means for the self. There are techniques the subject can use (and must use in order to become a subject and be recognized as such within a particular normative regime) to steer itself back to its self. The subject is a steersman whose port of call is its "self". The self, then, is a relationship, not an identity: something which is always being undone and must constantly be redone.

At this point, it would be worth considering the cybernetic conception of "feed-back", insofar as it can to a large extent be articulated with the way Foucault qualifies governmentality as a form of self-government, as a relationship of the self to itself. Everyone probably already has the navigational etymology of cybernetics in mind: cybernetics comes from the Greek "kubernetes", which means steersman. Its Latin derivative ("gubernator") gave us the word "government" and should always remind us of the impurity of political government: neither body nor machine, a society is perpetually searching for its model of government.27 I will not go into a general presentation of cybernetics, but only wish to point towards a specific problem its research addressed: feedback mechanisms in purposeful and predictive behaviors. Steering a ship can be seen as a feedback system in which certain inputs (wind, speed, direction, etc.) are constantly corrected in view of attaining a certain output (destination). In their famous 1943 article, "Behavior, Purpose and Teleology", Rosenblueth, Wiener and Bigelow establish a stratified classification of behavior aimed at reintroducing "teleology" (which behaviorism had rejected) into the analysis of behavior under the guise of "purpose controlled by feedback". The fundamental idea is that there exist forms of behavior that operate as extrapolations of current states, constantly corrected and adjusted in time: "Predictive behavior may be subdivided into different orders. The cat chasing the mouse is an instance of first-order prediction; the cat merely predicts the path of the mouse. Throwing a stone at a moving target requires a second-order prediction; the paths of the target and of the stone should be foreseen. Examples of predictions of higher order are shooting with a sling or with a bow and arrow."28 While the mechanistic undertone and the nature of the examples used may leave us uncomfortable as we try to approach problems of subjectivation, it nonetheless seems that the same dynamic could apply: Could the subject not be seen as constantly extrapolating its expectations onto others and revising or adjusting those expectations as others respond? Again, this does not imply a subject that is fully aware of or sovereign over itself, but rather a subject as the sign of a higher-level predictive relationship, one which gives a unity of purpose to a disparate field of influences. This unity is achieved not in substantive terms, but as a prediction of the efficacy of action: the subject is the one that experiences its actions as producing effects outside of itself that affect it in return.

Norbert Wiener identified the error or difference between an input and an output not as a failure to attain the output, but as the driving force of dynamic systems. Error is what makes feedback possible. He writes: "when we desire a motion to follow a given pattern, the difference between this pattern and the actually performed motion is used as a new input to cause the part regulated to move in such a way as to bring its motion closer to that given by the pattern."29 The seemingly trivial example Wiener gives of such a feedback mechanism is that of picking up a pencil. "What we will is to pick the pencil up. Once we have determined on this, our motion proceeds in such a way that we may say roughly that the amount by which the pencil is not yet picked up is decreased at each stage. This part of the action is not in full consciousness. To perform an action in such a manner, there must be a report to the nervous system, conscious or unconscious, of the amount by which we have failed to pick up the pencil at each instant."30 What is important to grasp here is that these feedback processes imply a "self-regulating" movement in which the goal to be achieved is not external to the process but involved in the process it takes part in transforming. It is present as a prediction directing the purposive behavior. Purposive behavior is neither fully conscious nor fully automatic. It is a "black-box" which takes an input and turns it into an output, without needing to know exactly how it happened for it to happen.
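
Wiener's pencil example can be rendered as a toy negative-feedback loop in a few lines. The gain and tolerance values below are illustrative assumptions; what matters is that the remaining error, the amount by which the pencil is "not yet picked up", is reported back at each instant and drives the next correction.

```python
def feedback_loop(target, position=0.0, gain=0.4, tolerance=1e-3):
    """Steer `position` toward `target` by correcting on the error signal."""
    trajectory = [position]
    while abs(target - position) > tolerance:
        error = target - position        # report of the remaining shortfall
        position += gain * error         # correction proportional to the error
        trajectory.append(position)
    return trajectory

# The "hand" closes a fraction of the remaining distance at each step:
steps = feedback_loop(target=1.0)
print(len(steps), [round(p, 3) for p in steps[:5]])
```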

Behavior is composed of multiple levels which can neither be subsumed under the category of pure choice nor under that of pure mimesis. We can only account for this disparity through the socio-technical mediations involved in action, that is to say its ethical dimension. Is it the same thing to know you want to watch a given movie and to put aside time to watch it, as it is to feel like watching a movie without knowing which one in particular? In both cases the behavior is purposive, but in the first case everything done will be geared towards "watching that movie" (if it is really important to you, you may even organize your day and your social life around it). It may appear here that this behavior is the product of a willful choice and owes nothing to mimesis, but perhaps the reason why watching this movie is so important is that someone you hold in high esteem said "what, you never saw that movie?" In the second case, we are looking for a movie that would satisfy a general feeling, an imprecise desire; the recommendation here has the potential to inform a large spectrum of latent desire, and we are, knowingly or not, expecting it to guide us out of a state of unfocused attention into a state of focused attention. A successful recommendation is one that will have succeeded in catching and maintaining our attention.31 In any case, what counts, what matters, what will hold our attention will not be the same in different behavioral contexts. The same behavior can give rise to very different acts, and the same act can be achieved through very different behaviors.32 The value of a behavior can only be measured with respect to the purpose (which need not be fully conscious) of the act; there may be a higher purpose to seeing the movie than seeing the movie.

Moments of choice and decision help give consistency to what we like to call a subject. They are events that help us locate subjectivity, but they are only the traces of the self's movement. As Simondon tells us, an act is transductive. It has no center but only limits, and its value can be measured by the extent to which it spreads throughout the self and not simply by how many individuals it impacts. Choice is a borderline moment in which the subject realizes its own limitations insofar as it is confronted with a field of preexisting acts from which it must choose. "Choice", says Simondon, "is the discovery and the institution of the collective."33 Choice is not the sole act of the subject; in choosing, the subject is actually engaging with a collective field of action out of which its act emerges and with which it must resonate. "Ontologiquement, tout vrai choix est réciproque et suppose une opération d'individuation plus profonde qu'une communication des consciences ou une relation intersubjective. Le choix est opération collective, fondation de groupe, activité transindividuelle."34 Choice is discovered in action: the subject is constantly discovering its preferences, its wants and needs, its sense of values by acting in specific situations. Choices are expressions of the collective life the subject bears. In this light, it could be said that a recommendation informs the choices not of an individual, but of a collective, a more-than-individual. For Simondon, this understanding of subjectivity corresponds to a radical shift in the way we should consider ethical problems: ethics is concerned with the value of an act, understood neither as the induction of a unifying rule of action from a plurality of occurrences, nor as the deduction of specific decisions from an overarching principle, but as the transduction of an act into other acts. In other words, it is an act's ability to constitute a network of acts with which it resonates that gives it value and meaning.35 The act of choosing is less about choosing for ourselves than it is about choosing for others; or rather, in making choices we are constantly contributing to the conditions of others' actions. A recommendation could then be seen as a choice turning back on itself, feeding back into itself. This would undoubtedly be a promising direction in which to push the algorithmic design of recommender systems. For such a process to be considered a technique of the self, there must be room for experiencing the movement of the self, coming back to itself from a different perspective.

4. Perspectives for Algorithmic Design

Recommendation algorithms may be an opportunity for reinterpreting the interplay of technology and subjectivity. These algorithmic apparatuses pose a number of challenges for thinking the normativity of a recommendation and how it interacts with our behaviors. If we take seriously the fact that recommendations are predictions waiting to be fulfilled, potential relationships waiting to be realized, then we need to determine to what extent recommendation algorithms can be techniques of the self, insofar as the recommendation would be a means of acting upon the self and the self a means of informing the recommendation. Of course, for this to happen, a number of preconditions must be fulfilled. The obvious ones are that the algorithms be open-source and non-proprietary. But these are matters which do not concern the vast majority of users so long as our technical culture remains in a state of general atrophy. And even for those to whom they are a concern, they are necessary but insufficient.

What needs to be overcome is a two-sided fetish. First, algorithmic design seems convinced that accessing the behavioral level of interaction coincides with a greater level of truth and authenticity, as if the subject were incapable of speaking honestly of itself. Technology is seen as a producer of transparency, of more visibility, and reactivates some of the dreams of social physics. Conversely, critics of "black-box" normativity imagine that by accessing the code, the written structure of the algorithm, we will unveil something of its essence, its truth, its aletheia – which could be seen as a sort of Protestant obsession with scripture, with the written word. Technology here is seen as opaque and needing to be rendered transparent. So there are two conflicting ideas of technological transparency: one holds that the user should see through the technique so as to behave as naturally as possible without taking it into account; the other holds that technology needs to be unveiled and exposed in order for us to have a "real" rapport with it. Behaviors are composed of multiple levels, and just as we do not need to directly tinker with someone's central nervous system when talking to them, we do not need to access the computational or algorithmic level to partake in the recommendation process. Rather, what needs to be considered are possibilities for users to intervene on their own behaviors by intervening on others'. Can we design interfaces in which the preferences we express, the behaviors we adopt, the choices we make are ways of participating actively in collectives? In other words, can we design interfaces which contribute to slowing down processes of selection and mimesis by adding levels of mediation?

My claim is that users do not need to see through the algorithm nor see the actual workings of the algorithm, but that they need to be able to see themselves when using the algorithm: how their traces are being used to inform others' behaviors, and how others' traces are being used to inform their own. When using the technique, the user must be able to experiment with itself; this implies that the feedback mechanisms cannot be implicit or hidden, but that they need to be that which is experienced. Ideally a recommendation should be experienced as a difference driving our choices; the difference being the sign of the collective.
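
To suggest what such an experience might look like at the level of design, here is a speculative sketch rather than any existing system: a toy neighborhood recommender that returns, alongside each recommendation, which other users' traces informed it and which of the user's own traces were used to establish the association. All names, data and the scoring rule are hypothetical.

```python
from collections import defaultdict

def recommend_with_provenance(user, ratings, top_k=1):
    """Neighborhood recommendation that exposes its own feedback loop.

    ratings: {user: {item: score}}. Returns (item, provenance) pairs, where
    provenance records whose traces informed the suggestion and which of the
    target user's own traces were used to select those neighbors.
    """
    mine = ratings[user]
    # Neighbors are users who share at least one rated item with us;
    # the shared items are exactly "our traces being used".
    neighbors = {
        other: sorted(set(mine) & set(theirs))
        for other, theirs in ratings.items()
        if other != user and set(mine) & set(theirs)
    }
    scores, sources = defaultdict(float), defaultdict(list)
    for other, shared in neighbors.items():
        for item, score in ratings[other].items():
            if item not in mine:
                scores[item] += score * len(shared)   # weight by shared traces
                sources[item].append((other, shared))
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [(item, sources[item]) for item in ranked]

ratings = {
    "ana": {"film_a": 5, "film_b": 4},
    "ben": {"film_a": 4, "film_c": 5},
    "cam": {"film_b": 5, "film_c": 3},
}
for item, prov in recommend_with_provenance("ana", ratings):
    print(item, "recommended via", prov)
# film_c recommended via [('ben', ['film_a']), ('cam', ['film_b'])]
```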

Notes

  1. This could, to a certain extent, be considered in Tardean terms, see Gabriel Tarde: Les Lois de l'imitation, Paris: Félix Alcan 1895. [^]
  2. See, for example, Nicholas Diakopoulos: Algorithmic Accountability: On the Investigation of Black Boxes, in: Tow Center for Digital Journalism, URL: http://towcenter.org/research/algorithmic-accountability-on-the-investigation-of-black-boxes-2 (Dec 2, 2014); Frank Pasquale: The Black Box Society, Cambridge, MA: Harvard University Press 2015; Christian Sandvig et al.: Auditing algorithms: Research methods for detecting discrimination on internet platforms, paper presented at the 64th Annual Meeting of the International Communication Association in Seattle on May 22, 2014. [^]
  3. Nick Seaver: Knowing algorithms, in: Media in Transition, 8 (2013), p. 9–10; Mike Ananny and Kate Crawford: Seeing Without Knowing: Limitations of the transparency ideal and its application to algorithmic accountability, in: New Media and Society (Dec 13, 2016), DOI: https://doi.org/10.1177/1461444816676645; Tyler Reigeluth: L'algorithmique a ses comportements que le comportement ne connaît pas, in: Multitudes, 62 (2016), URL: http://www.multitudes.net/lalgorithmique-a-ses-comportements-que-le-comportement-ne-connait-pas/. [^]
  4. Gilles Deleuze: Post-scriptum sur les sociétés de contrôle, in: Pourparlers, Paris: Editions de Minuit 1990, pp. 240–247. [^]
  5. Y. Koren, R. Bell and C. Volinsky: Matrix Factorization Techniques for Recommender Systems, in: Computer, 42/8 (2009), pp. 30–37, DOI: https://doi.org/10.1109/MC.2009.263. (The authors are the winners of the 2009 Netflix Prize, which was awarded to the research team that best improved Netflix's recommendation algorithm.) [^]
  6. Sumit Chopra and Suhrid Balakrishnan: Collaborative Ranking, in: WSDM '12 Proceedings of the fifth ACM international conference on Web search and data mining, Seattle 2012, pp. 143–152, here p. 143. [^]
  7. Gautam Shroff: The Intelligent Web: Search, Smart Algorithms, and Big Data, Oxford: Oxford University Press 2015, p. 118. [^]
  8. "Distortion" or "corruption" alludes to the understanding of behavior as natural signaling processes analogue to that of digital signal processing and transmission. [^]
  9. Dominique Cardon: Dans l'esprit du PageRank, in: Réseaux, 31/177 (2013), p. 80. [^]
  10. On a further and very contemporary level, this market transparency, fueled by behavioral disinhibition, can be seen in Amazon's (already a central driver of disintermediation) plan to deploy supermarkets where the customer interacts "directly" with goods by means of a slew of behavioral technologies and avoids shopping mediations such as check-out counters. [^]
  11. Sandy Pentland: Honest Signals, Cambridge, MA: MIT Press 2008, pp. 3–4. [^]
  12. Laurent Deveau and Corina Paraschiv: Le rôle des agents intelligents sur l'Internet, in: Revue française de gestion, 30/152 (2004), pp. 15–16. [^]
  13. To a large extent this attempt to go beyond the subjective/objective opposition of behavior is what Merleau-Ponty undertakes in: La structure du comportement, Paris: Presses Universitaires de France 2009 [1942]. I will not address this here, but it is a latent influence of my research. [^]
  14. Paul Erickson, Judy Klein, Lorraine Daston et al.: How Reason Almost Lost Its Mind, Chicago: University of Chicago Press 2013, p. 118. [^]
  15. Ibid., p. 123. [^]
  16. Ibid., p. 45. [^]
  17. This self-restraint and limitation of rationality as being immanent to the activity in question is something that has been substantially underlined by Thomas Berns in his characterization of new forms of normativity as governing from the real rather than governing the real. See Thomas Berns: Gouverner sans gouverner. Une archéologie politique de la statistique, Paris: Presses Universitaires de France 2009. [^]
  18. Significantly, another one of Pentland's books is: Social Physics, New York: Penguin Books 2014. [^]
  19. This is something Pentland's team is also developing in the field of Computer Vision for modeling human interactions in "unconstrained environments" in which synthetic agents (i.e. Bayesian machine-learning algorithms used to model human behavior) "mimic" human behavior in a "virtual environment" all the while being able to recognize a novel or rare behavior pattern. See Nuria Oliver, Barbara Rosario and Alex Pentland: A Bayesian Computer Vision System for Modeling Human Interactions, in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 22/8 (2000), pp. 831–843. [^]
  20. James Davidson et al.: The YouTube Video Recommendation System, in: Proceedings of the 2010 ACM Conference on Recommender Systems (RecSys 2010), p. 294, DOI: https://doi.org/10.1145/1864708.1864770. [^]
  21. Maria Mercanti-Guérin: L'amélioration du reciblage par les Big Data: une aide à la décision qui menace l'image des marques? in: Revue internationale d'intelligence économique, 5/2 (2013), pp. 153–165. [^]
  22. Christiane Sowadogo: The Rise of Ultra-tailored Advertising, in: Annales des Mines – Réalités industrielles, 3 (2014), p. 59. [^]
  23. Michel Foucault: L'Herméneutique du sujet, Paris: Seuil/Gallimard 2001, p. 241. English trans. by Graham Burchell (New York: Palgrave 2005, p. 252): "[…] if we take the question of power, of political power, situating it in the more general question of governmentality understood as a strategic field of power relations in the broadest and not merely political sense of the term, if we understand by governmentality a strategic field of power relations in their mobility, transformability, and reversibility, then I do not think that reflection on this notion of governmentality can avoid passing through, theoretically and practically, the element of a subject defined by the relationship of self to self." [^]
  24. Ibid., p. 238 ["The two elements we must try to disentangle are movement and return; the subject's movement towards himself and the self's turning back on itself." (p. 248)]. [^]
  25. Muriel Combes: La vie inséparée, Paris: Editions Dittmar 2002, p. 69. My Engl. trans.: "The concept of the self, even more so than that of the subject it would seem, can never be distinguished from a relationship." "What Foucault's analysis of techniques of the self shows, is that before naming the subject itself, 'self' is the name given to a relational potentiality." [^]
  26. Foucault: L'Herméneutique du sujet, p. 123. ["In the practice of the self, someone else, the other, is an indispensable condition for the form that defines this practice to effectively attain and be filled by its object, that is to say, by the self." (p. 127)] Also see the passage a few pages later: "Entre l'individu stultus et l'individu sapiens, l'autre est nécessaire. Ou encore: entre l'individu qui ne veut pas son propre soi et celui qui sera arrivé à un rapport de maîtrise sur soi, de possession de soi, de plaisir à soi, qui est en effet l'objectif de la sapientia, il faut que l'autre intervienne. Car structurellement si vous voulez, la volonté, caractéristique de la stultitia, ne peut pas vouloir se soucier de soi. Le souci de soi par conséquent nécessite bien, vous le voyez, la présence, l'insertion, l'intervention de l'autre." (p. 129) ["Between the stultus individual and the sapiens individual, the other is necessary. Or again, intervention by the other is necessary between, on the one hand, the individual who does not will his own self and, on the other, the one who has achieved a relationship of self-control, self-possession, and pleasure in the self, which is in fact the objective of sapientia. For structurally, if you like, the will that is typical of stultitia is unable to want to care about the self. The care of the self consequently requires, as you can see, the other's presence, insertion, and intervention." (p. 133–134)] Interestingly, this idea of "a will too weak to act for its own good" or akrasia is taken up by Erickson et al. in their description of Cold War rationality intent on creating the "situation" within which rules could be followed: "Whether played out in microcosm or macrocosm, the problem was perceived as the same: how to forge the internal consistency of society and self that would make the world safe for the rationality of rules." (Erickson: How Reason Almost Lost Its Mind, p. 50). [^]
  27. Georges Canguilhem: Le problème des régulations dans l'organisme et la société, in: Ecrits sur la médecine, Paris: Editions du Seuil 2002; Andrea Bardin: La société, 'machine autant que vie'. Régulation et invention politique entre Wiener, Canguilhem et Simondon, in: Vincent Bontems (ed.): Gilbert Simondon ou l'invention du futur, Paris: Klincksieck 2016, pp. 31–44. In this regard, it is rather telling that Foucault locates government for its own sake, for its own self, in the doctrine of the "Raison d'Etat" whereby government is seen as the sole prerogative of the State, the interest of which supersedes, in last resort, all other interests. [^]
  28. Arturo Rosenblueth, Norbert Wiener and Julian Bigelow: Behavior, Purpose and Teleology, in: Philosophy of Science, 10 (1943), p. 3. [^]
  29. Norbert Wiener: Cybernetics or, Control and Communication in the Animal and the Machine, Eastford, CT: Martino Fine Books 2013 [1948], pp. 6–7. [^]
  30. Ibid., p. 7. [^]
  31. In a similar way, Jonathan Crary frames the problem of "attention" in his book: Suspensions of Perception: Attention, Spectacle and Modern Culture, Cambridge, MA: MIT Press 1999. [^]
  32. This is something the sociologist of science Harry Collins has discussed at length, namely in: Artificial Experts, Social Knowledge and Intelligent Machines, Cambridge, MA: MIT Press 1990. [^]
  33. Gilbert Simondon: L'individuation à la lumière des notions de forme et d'information, Grenoble: Millon 2013, p. 300. [^]
  34. Ibid., p. 301. My Engl. trans.: "Ontologically speaking, all actual choice is reciprocal and supposes an operation of individuation, which is deeper than the communication of consciousnesses or an intersubjective relationship. Choice is a collective operation, a group foundation, a transindividual activity." [^]
  35. Ibid., p. 323. [^]

Bibliography

Ananny, Mike and Kate Crawford. Seeing Without Knowing: Limitations of the transparency ideal and its application to algorithmic accountability. In: New Media and Society (Dec 13, 2016). DOI: https://doi.org/10.1177/1461444816676645

Bardin, Andrea. La société, 'machine autant que vie'. Régulation et invention politique entre Wiener, Canguilhem et Simondon. In: Vincent Bontems (ed.): Gilbert Simondon ou l'invention du futur, Paris: Klincksieck 2016.

Canguilhem, Georges. Le problème des régulations dans l'organisme et la société. In: Ecrits sur la médecine, Paris: Editions du Seuil 2002.

Cardon, Dominique. Dans l'esprit du PageRank. In: Réseaux, 31/177 (2013).

Chopra, Sumit and Suhrid Balakrishnan. Collaborative Ranking. In: WSDM '12 Proceedings of the fifth ACM international conference on Web search and data mining, Seattle 2012.

Collins, Harry. Artificial Experts, Social Knowledge and Intelligent Machines, Cambridge, MA: MIT Press 1990.

Combes, Muriel. La vie inséparée, Paris: Editions Dittmar 2002.

Davidson, James et al. The YouTube Video Recommendation System. In: Proceedings of the 2010 ACM Conference on Recommender Systems (RecSys 2010), pp. 293–296. DOI: https://doi.org/10.1145/1864708.1864770

Deleuze, Gilles. Post-scriptum sur les sociétés de contrôle. In: Pourparlers, Paris: Editions de Minuit 1990, pp. 240–247.

Deveau, Laurent and Corina Paraschiv. Le rôle des agents intelligents sur l'Internet. In: Revue française de gestion, 30/152 (2004), pp. 7–34.

Diakopoulos, Nicholas. Algorithmic Accountability: On the Investigation of Black Boxes. In: Tow Center for Digital Journalism, URL: http://towcenter.org/research/algorithmic-accountability-on-the-investigation-of-black-boxes-2 (Dec 2, 2014).

Erickson, Paul, Judy Klein, Lorraine Daston et al. How Reason Almost Lost Its Mind, Chicago: University of Chicago Press 2013.

Foucault, Michel. L'Herméneutique du sujet. Paris: Seuil/Gallimard 2001.

Koren, Y., R. Bell and C. Volinsky. Matrix Factorization Techniques for Recommender Systems. In: Computer, 42/8 (2009), pp. 30–37. DOI: https://doi.org/10.1109/MC.2009.263

Mercanti-Guérin, Maria. L'amélioration du reciblage par les Big Data: une aide à la décision qui menace l'image des marques? In: Revue internationale d'intelligence économique, 5/2 (2013), pp. 153–165.

Oliver, Nuria, Barbara Rosario and Alex Pentland. A Bayesian Computer Vision System for Modeling Human Interactions. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 22/8 (2000), pp. 831–843. DOI: https://doi.org/10.1109/34.868684

Pasquale, Frank. The Black Box Society, Cambridge, MA: Harvard University Press 2015.

Pentland, Sandy. Honest Signals. Cambridge, MA: MIT Press 2008.

Reigeluth, Tyler. L'algorithmique a ses comportements que le comportement ne connaît pas. In: Multitudes, 62 (2016). URL: http://www.multitudes.net/lalgorithmique-a-ses-comportements-que-le-comportement-ne-connait-pas/.

Rosenblueth, Arturo, Norbert Wiener and Julian Bigelow. Behavior, Purpose and Teleology. In: Philosophy of Science, 10 (1943), pp. 18–24.

Sandvig, Christian et al. Auditing algorithms: Research methods for detecting discrimination on internet platforms. Paper presented at the 64th Annual Meeting of the International Communication Association in Seattle on May 22, 2014.

Shroff, Gautam. The Intelligent Web: Search, Smart Algorithms, and Big Data. Oxford: Oxford University Press 2015.

Seaver, Nick. Knowing algorithms. In: Media in Transition, 8 (2013).

Simondon, Gilbert. L'individuation à la lumière des notions de forme et d'information, Grenoble: Millon 2013.

Sowadogo, Christiane. The Rise of Ultra-tailored Advertising. In: Annales des Mines – Réalités industrielles, 3 (2014).

Tarde, Gabriel. Les Lois de l'imitation, Paris: Félix Alcan 1895.

Wiener, Norbert. Cybernetics or, Control and Communication in the Animal and the Machine. Eastford, CT: Martino Fine Books 2013 [1948].