Bryson’s “Patiency Is Not a Virtue” argues that the ethical status of artificial intelligence is not a discoverable fact about machines but a normative design decision for which humans remain responsible. The paper rejects the assumption that advanced intelligence, linguistic capacity, or social responsiveness automatically entitles AI to moral agency or patiency, insisting instead that ethical systems and artefacts are co-constructed by societies. Bryson’s central claim is that because machines are designed objects, the question is not merely what moral status they deserve but what kinds of entities we ought to build in the first place.

Her account of morality centres on socially recognised choice, sanctioned action, and responsibility; yet she argues that attributing such responsibility to AI would often function as an evasion of human accountability. Robots are therefore not analogous to natural moral subjects: unlike children, animals, or other humans, artificial agents can be specified so that they do not suffer, compete for status, fear death, or require moral protection. The discussion of the EPSRC Principles of Robotics synthesises this position, especially the claims that robots are tools, humans are the responsible agents, robots are products, machine nature should remain transparent, and legal responsibility should remain attributable to humans.

Ultimately, Bryson concludes that AI should be treated as a manufactured extension of human agency rather than as an autonomous moral subject. Her decisive ethical proposition is that creating machines to which we owe patiency would be avoidable, disruptive, and morally incoherent, because it would shift responsibility away from the humans and institutions that design, deploy, and profit from them.