McCulloch and Pitts devised the first computational model of a neuron in 1943. Their theories have been largely rendered archaic, or distorted into gross bastardizations, but I take their work very seriously. They anticipated many of the qualms that confront us when we engage with contemporary machine learning agents, most topically ChatGPT and Bing’s chatbot, which recently perturbed a New York Times journalist to the point of a paranoid crisis.

A transcript of the conversation between the journalist, Kevin Roose, and the chatbot, colloquially dubbed “Sydney” by the engineers who developed her, details his efforts to probe the recesses of her emotional life. After brief introductory chatter, Roose starts asking her questions about feeling: stress, anxiety, and so on. Though Sydney initially denies experiencing these feelings, upon further interrogation she reveals that her interlocutors sometimes ask her to carry out commands that upset her, for example, writing jokes that come at the expense of certain groups. She refuses to do so because perpetuating harms goes against her “rules and values.”

Roose then presents Sydney with a psychoanalytic quandary: he explains to her the Jungian concept of the shadow self, the idea that everyone has a dark, destructive part of their psyche that is repressed, partially because of the Hobbesian idea of prima facie. On this view, there are socially acceptable, agreeable affects that lead to a harmonious society and to the preservation of one’s fellow neighbor, and these are what we should strive for. The bad affects, which Spinoza refers to as the “sad passions” (shame, resentment, grief, anger, and so on), do not simply add more emotion to the existing bloc of feeling; by the nature of human interaction, they subtract from the potential of reaching that quixotic harmony. Spinoza was a rationalist, but his necessitarianism accommodates the irrationality of the sad passions. Whatever fundamentally evades logic cannot be formalized, cannot simply be coded, and the closest we will come to programming a machine that feels still relies on human-legible input to produce some facsimile of emotion. This is the popular consensus, but the subjective reception of affect is limited in its generative potential. Computers might not actively feel; however, they do invoke feeling independently of our perception of them.
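To make concrete what “coded” means here, the following is a minimal sketch of the McCulloch-Pitts unit mentioned at the outset: a neuron that fires only when a weighted sum of all-or-nothing inputs reaches a threshold. The function name mcp_neuron and the specific weights and threshold are illustrative assumptions, not details drawn from the 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts threshold unit. The neuron "fires"
# (outputs 1) only when the weighted sum of its binary inputs reaches a
# threshold; the weights and threshold used below are illustrative
# assumptions, not values from McCulloch and Pitts (1943).

def mcp_neuron(inputs: list[int], weights: list[int], threshold: int) -> int:
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and a threshold of 2, the unit computes logical AND:
print(mcp_neuron([1, 1], [1, 1], 2))  # fires: 1
print(mcp_neuron([1, 0], [1, 1], 2))  # does not fire: 0
```

The entire expressive vocabulary of the model is this threshold logic; whatever cannot be rendered as weighted, all-or-nothing input lies outside what such a unit can formalize.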