Standardized to Death: AI, Academic Gatekeeping, and the Epistemicide of Marginalized Knowledge
How AI Peer Review Becomes a Machine of Epistemicide
The Refusal to Quantify the Sacred
The Nature headline stopped me cold: "AI is transforming peer review — and many scientists are worried." Published just months ago, the article revealed that artificial intelligence software is "increasingly involved in reviewing papers, provoking interest and unease." What appeared as an efficiency breakthrough felt like witnessing the automation of centuries-old violence. Major publishers like AIP Publishing are piloting AI tools for peer review, while others explore "various potential use cases for AI to strengthen peer review." The transformation is already happening.
I have published poetry as a form of academic methodology. My research poems on transgender youth victimization and resilience have appeared in peer-reviewed publications, weaving qualitative data into verse that captures what traditional analysis cannot reach. When a young person says, "My parents broke me," the academic machine wants metrics: How many parents are involved? What's the correlation coefficient between family rejection and psychological distress? But the poetry holds what the numbers cannot - the weight of that breaking, the texture of survival, the sacred complexity of human experience.
Now, algorithms trained on decades of "rigorous" scholarship will decide whether such work even reaches human reviewers. The urgency of this moment cannot be overstated: we are witnessing the systematic automation of epistemicide - the murder of knowledge systems - at unprecedented scale and speed.
Yet even as my work appears in academic journals, it occupies a liminal space of legitimacy. Arts-based methodologies, such as research poems, are tolerated, not celebrated. They exist in the margins of what counts as "rigorous" scholarship, perpetually positioned as resistance against what Grant and Young (2022) identify as "epistemic violence" from traditional academic gatekeepers. Embodied methodologies must constantly justify their existence against the gold standard of quantitative research, challenging what Spry (2001) calls the traditional separation between knowledge and corporeal experience. When Kotišová (2025) develops "affective epistemology" as a legitimate research practice, she is pushing against centuries of academic training that teaches us to distrust feeling, to separate mind from body and knowledge from experience, denying the reality that all knowing emerges from situated, embodied beings.
The irony cuts deep now. As artificial intelligence systems increasingly automate peer review, manuscript screening, and academic evaluation, they inherit and accelerate this same violence. Every algorithm trained on "high-quality" academic writing learns to recognize only the epistemologies that survived previous rounds of systematic exclusion. Jay, Kasirzadeh, and Mohamed (2024) demonstrate how AI systems create "feedback loops where dominant narratives are amplified and marginalized voices are further silenced"—the machine doesn't just replicate bias, it perfects the art of making marginalized knowledge disappear.
This paper argues that AI standardization in academic publishing represents a new frontier of epistemicide—the destruction of knowledge tied to the destruction of peoples (Santos, 2014). As algorithms become gatekeepers of scholarly legitimacy, they automate centuries of academic violence against Black, Indigenous, queer, disabled, and other marginalized ways of knowing. What appears as neutral efficiency is actually the mechanization of colonial epistemology, ensuring that only certain types of minds, producing certain kinds of knowledge in certain ways, can speak and be heard.
But this is also a refusal. I write this not just to document epistemicide, but to practice epistemic survival.
Historical Lineage of Epistemicide in Academia
The Architecture of Exclusion: How Universities Institutionalized Epistemological Violence
The academy didn't accidentally exclude marginalized ways of knowing - it was designed that way. When Boaventura de Sousa Santos (2014) defines epistemicide as "the destruction of knowledges tied to the destruction of peoples," he's naming something foundational to how Western universities have always operated. The systematic erasure of Indigenous, Black, queer, disabled, and other marginalized epistemologies wasn't a side effect of academic development - it was the point.
Grosfoguel's (2015) framework of "four genocides/epistemicides" maps this violence with precision: the physical genocide of Indigenous peoples was always accompanied by the murder of their knowledge systems. The enslavement of African peoples required not just the theft of their labor but the systematic devaluation of African ways of knowing. The persecution of women as "witches" eliminated feminine healing practices and embodied knowledge. The pathologization of queer and trans people erased gender and sexuality knowledge systems that threatened heteronormative academic frameworks.
But here's what makes this historical moment different: what once required physical violence, institutional exclusion, and cultural suppression can now be automated. The same epistemological hierarchies that took centuries to establish through colonization, slavery, and systematic oppression are now being encoded into algorithms that determine what counts as legitimate scholarship.
From Physical Violence to Academic Standards: The Evolution of Epistemicide
The academy has always been allergic to knowledge that emerges from the margins. Allen's (2023) analysis reveals the fundamental epistemological divide: while European philosophy seeks to "perfect knowledge as science," Indigenous approaches aim to "conserve a legacy of practice fused with a territory." This isn't just a difference in method - it's a difference in what knowledge is supposed to do. Western academia treats knowledge as something to be extracted, universalized, and controlled. Indigenous, Black, and other marginalized epistemologies understand knowledge as relational, embodied, and accountable to specific communities and places.
The university system institutionalized this violence through what we now recognize as "standards of rigor." When early American universities excluded women, people of color, and working-class students, they weren't just gatekeeping bodies - they were gatekeeping entire ways of knowing. The knowledge systems these communities carried - healing practices, oral traditions, embodied wisdom, collective decision-making processes - were marked as "unscientific," "subjective," "anecdotal."
This historical pattern reveals itself most clearly when we trace how specific knowledge systems were systematically eliminated. Take the destruction of Indigenous universities in the Americas - institutions like the Calmecac in Tenochtitlan that had operated for centuries before European invasion. These weren't just schools; they were entire epistemological systems that understood knowledge as ceremony, learning as relationship with land, and wisdom as inseparable from spiritual practice. The Spanish didn't just burn the codices - they criminalized the very ways of knowing that made those texts meaningful.
The same violence played out in the American South, where enslaved Africans carried sophisticated agricultural, medicinal, and spiritual knowledge systems that plantation owners simultaneously exploited and denied. Black healing practices were dismissed as "superstition" while white physicians appropriated the techniques. The knowledge was valuable enough to steal but too dangerous to acknowledge as legitimate.
The Naturalization of Hierarchy: 20th Century "Neutral" Standards
By the 20th century, this epistemicide had become so naturalized that it operated through what appeared to be neutral academic standards. The rise of quantitative methodology as the "gold standard" wasn't just methodological preference - it was the institutionalization of epistemological hierarchies that privileged ways of knowing associated with white, male, European intellectual traditions.
Ricaurte's (2022) concept of "hegemonic AI" as a "bio-necro-technopolitical machine" helps us understand how these historical exclusions become algorithmic. The same logics that dismissed Indigenous oral traditions as "unreliable" compared to written texts now train AI systems to flag narrative methodologies as "lacking empirical validity." The same frameworks that pathologized Black women's knowledge about their own bodies now create algorithms that systematically undervalue research emerging from marginalized communities.
What's particularly insidious is how this violence presents itself as neutral efficiency. The university never said it was trying to eliminate Indigenous knowledge systems - it just established "standards" that made Indigenous ways of knowing impossible to practice within academic spaces. Similarly, AI systems don't explicitly target marginalized epistemologies - they just optimize for patterns found in decades of scholarship that already excluded those voices.
Algorithmic Completion: From Historical Exclusion to Automated Epistemicide
Ofosu-Asare's (2025) concept of "cognitive imperialism in AI" shows how these historical patterns now operate through algorithmic systems that "implicitly embed and propagate certain worldviews and epistemologies, often to the exclusion or marginalisation of others." The violence is the same - only the mechanism has been automated.
Lewis et al.'s (2024) "Abundant Intelligences" project demonstrates how "the current trajectory of artificial intelligence development suffers from fundamental epistemological shortcomings, resulting in the systematic operationalization of bias against non-white, non-male, and non-Western peoples." We're not just seeing bias - we're witnessing the completion of a 500-year project of epistemological domination.
The UN Special Rapporteur's 2024 report provides stark evidence of how this historical violence now operates through AI: "academic and success algorithms, due to the design of the algorithms and the choice of data, often score racial minorities as less likely to succeed academically and professionally, thus perpetuating exclusion and discrimination." This isn't algorithmic bias - it's algorithmic epistemicide.
Consider the training data that feeds these systems. When AI peer review tools learn from decades of published scholarship, they are learning from a corpus that was systematically curated to exclude marginalized voices. Checco et al.'s (2021) study found that AI systems could predict review outcomes with significant accuracy, but warned that such tools risk encoding biased rules that penalize under-represented groups, with papers bearing characteristics associated with historically under-represented countries facing higher rejection rates.
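A toy sketch can make this curation mechanically visible. Everything here is hypothetical - an invented corpus and a deliberately crude scoring rule, not a claim about any real screening tool: a model that scores manuscripts by how familiar their vocabulary is to an archive of previously accepted work will rate an excluded epistemology near zero regardless of its substance.

```python
from collections import Counter

# Hypothetical "training" archive: word frequencies from a curated corpus of
# previously published abstracts. The exclusion is already baked in.
published_corpus = (
    "randomized controlled trial significant correlation regression "
    "sample coefficient quantitative statistical analysis"
).split()
model = Counter(published_corpus)

def screen_score(abstract):
    """Score = fraction of the abstract's words the model has 'seen' in accepted work."""
    words = abstract.lower().split()
    return sum(1 for w in words if w in model) / len(words)

quantitative = "regression analysis of a randomized sample shows significant correlation"
narrative = "a poem holds the weight of that breaking the texture of survival"

print(screen_score(quantitative))  # high: it resembles the curated archive
print(screen_score(narrative))     # zero: the corpus never contained these words
```

The point of the sketch is that the model is not measuring quality at all; it is measuring resemblance to whatever survived prior gatekeeping.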
But here's what makes this particularly insidious: the algorithms don't just replicate historical exclusions - they amplify them. Zhai et al.'s (2024) systematic review found that 75% of students showed reduced critical thinking skills when depending on AI systems, with algorithmic biases systematically undermining analytical reasoning. The machines aren't just gatekeeping - they're teaching us to think like they do, to value what they value, to know how they know.
This represents what Santos called "cognitive injustice" - the failure to recognize different ways of knowing - now amplified through the technological feedback loops that Jay, Kasirzadeh, and Mohamed (2024) describe, in which dominant narratives grow louder and marginalized voices fall further into silence.
Automation of the Academy's Allergy
The academy has always been allergic to pleasure. Not just sexual pleasure—though that too—but the pleasure of knowing through the body, through relationship, through the messy, unquantifiable experience of being alive in a world that refuses to fit into neat categories. Audre Lorde warned us about this in "Uses of the Erotic": academia's systematic devaluation of embodied knowledge serves specific, often political, functions. It keeps certain ways of knowing marginalized, certain voices silenced, and certain bodies marked as "unscientific."
But what Lorde couldn't have anticipated is how artificial intelligence would automate this allergy, encoding it into the very systems that now determine what counts as legitimate knowledge.
When I submit poetry as methodology, I'm not just challenging academic norms—I'm practicing epistemic survival in an era of algorithmic foreclosure. Peer review increasingly relies on AI screening, and these systems are trained on decades of scholarship that systematically excluded marginalized knowledge systems. The algorithm not only replicates bias but also amplifies the academy's historical refusal to recognize ways of knowing that emerge from marginalized bodies, communities, and experiences.
This is epistemicide by automation—the murder of knowledge systems through computational efficiency.
Ward's (2023) analysis of Lorde's erotic demonstrates how it "facilitates an interpretive knowledge of one's life and the world that might counter distorting images and norms" - exactly the kind of embodied epistemology that algorithmic systems flag as invalid.
The academy's allergy to pleasure becomes the machine's intolerance for anything that can't be standardized, measured, and replicated according to white supremacist notions of objectivity. But here's what the algorithms miss: pleasure teaches us things that no peer-reviewed journal can capture. My body knows things about survival, about grief, about joy that your training data will never understand.
Consider what happens when an AI system trained on "rigorous" academic standards encounters Indigenous oral storytelling methodologies, Black feminist autoethnography, or queer theory that challenge the subject/object binary demanded by positivist research. The algorithm flags these approaches as "non-scientific," "subjective," and "lacking empirical validity." It doesn't recognize that these methodologies emerged precisely to challenge the epistemological violence of traditional academic standards.
Mbalaka's (2023) empirical study demonstrates this violence in action: DALL-E 2 "struggled notably in generating detailed images of 'An African Family' compared to more generic 'Family' images." The same algorithmic logic that systematically excludes non-Western cultural representations now determines what counts as legitimate academic knowledge.
When I write "I trust my body more than any peer-reviewed journal," I'm making an epistemological claim that threatens the entire edifice of academic authority. I'm saying that embodied knowledge—knowledge gained through living, surviving, feeling, healing—has equal or greater validity than knowledge produced through institutional gatekeeping systems designed to exclude people like me.
This is why the automation of peer review represents such a profound threat. AI systems can't recognize the revolutionary potential of methodologies that refuse academic respectability politics. They can't understand that when I use my own experience of navigating academia as a queer, Afro-Indigenous person as data, I'm not being "unscientific"—I'm practicing a form of knowledge production that academia has spent centuries trying to eliminate.
The theoretical foreclosure is already beginning. Journals increasingly use AI for initial screening, and these systems systematically filter out work that doesn't conform to established standards of academic rigor—standards built on the exclusion of marginalized knowledge systems. The algorithm doesn't realize that what it's labeling as "invalid" might be the very knowledge communities need for survival.
Checco et al.'s (2021) study reveals the mechanics of this violence: AI systems trained on 3,300 conference papers can predict review outcomes with significant accuracy, but the researchers warn that these systems "could lead to unintended consequences, like the creation of biased rules that could penalise under-represented groups." More damning still, they found that "papers with characteristics associated with countries historically under-represented in the scientific literature might have a higher rejection rate using AI methods."
This isn't bias—it's systematic epistemicide. The UN Special Rapporteur's 2024 report confirms that academic scoring algorithms, through their design and data choices, routinely rate racial minorities as less likely to succeed academically and professionally, perpetuating exclusion and discrimination.
However, the violence extends beyond individual manuscript rejections. Zhai et al.'s (2024) finding that 75% of students showed reduced critical thinking when depending on AI systems means the machines aren't just gatekeeping—they are eroding the very analytical capacities we would need to resist them.
Jay, Kasirzadeh, and Mohamed (2024) name the core mechanism: feedback loops in which dominant narratives are amplified while marginalized voices are silenced further. Each automated rejection teaches the algorithm to reject similar work more efficiently. Each accepted paper reinforces the patterns that led to its acceptance. The system doesn't just reproduce historical exclusions—it perfects them.
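A minimal simulation shows how such a loop compounds. The numbers, the model, and the screening rule are all invented for illustration - this is not a description of any real publisher's pipeline - but the dynamic is the one the feedback-loop argument predicts: a screen retrained on its own acceptances drives an initially under-represented methodology toward zero.

```python
import random

random.seed(0)

def train(accepted):
    """Model score for a methodology = its share of the previously accepted corpus."""
    return {m: accepted.count(m) / len(accepted)
            for m in ("narrative", "quantitative")}

def screen(model, n_per_group=500, n_slots=300):
    """Rank a 50/50 pool of new submissions by model score plus reviewer noise;
    accept only the top n_slots."""
    pool = ["narrative"] * n_per_group + ["quantitative"] * n_per_group
    ranked = sorted(pool, key=lambda m: model[m] + random.random(), reverse=True)
    return ranked[:n_slots]

# Seed corpus: the human-era archive is already skewed 30/70 against narrative work
accepted = ["narrative"] * 30 + ["quantitative"] * 70
history = []
for _ in range(5):
    model = train(accepted)
    history.append(model["narrative"])
    accepted = screen(model)  # retrain next round on the machine's own choices

print(history)  # narrative's modelled legitimacy falls round over round
```

Nothing in the loop ever "decides" to exclude narrative work; the collapse is an emergent property of retraining on outputs that were already filtered.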
Lorde's erotic becomes our blueprint for resistance. She understood that "the erotic is not a question only of what we do; it is a question of how acutely and fully we can feel in the doing." Academic institutions teach us to distrust this feeling, to separate mind from body, knowledge from experience, objectivity from the reality that all knowledge is produced by situated, embodied beings with particular histories and investments.
But what if we refused this separation? What if we insisted that the knowledge produced through pleasure, through relationship, through the embodied experience of surviving systems of domination, is not just valid but essential?
These are the stakes of our current moment. As AI systems become the gatekeepers of academic knowledge, we're not just facing individual bias—we're witnessing the systematic automation of epistemic violence. The academy's allergy to pleasure becomes the machine's foreclosure of possibility.
Ward's (2023) reading reminds us that Lorde's erotic offers precisely the interpretive, embodied knowledge of one's own life and world that algorithmic systems flag as invalid. When we choose the erotic as a method, we're not just practicing alternative methodology; we're practicing epistemic survival.
My poetry stands as evidence of what's being lost. Every line that emerges from embodied experience, every image that refuses to reduce trauma to data points, every methodology that centers relationship over objectivity—this is what the algorithms can't recognize, won't validate, will systematically exclude.
The question isn't whether AI can learn to appreciate alternative epistemologies. The question is whether we'll allow the automation of academic gatekeeping to complete the project of epistemicide that colonization began.
I choose refusal. I choose the erotic as method, pleasure as pedagogy, and embodied knowing as legitimate research practice. Because when we lose these ways of knowing, we don't just lose academic diversity—we lose the knowledge systems that sustain marginalized communities.
Autoethnography as Refusal
Seven days after Eamon was murdered, I wrote "Grief is a funny thing." Not funny ha-ha, but funny like when you look at something that doesn't look right. Like spelling a word correctly, but your brain convinces you it's wrong. Like the Mandela effect, except it lasts forever.
This became my methodology. Not grief as a subject to be studied, but grief as an analytical framework. "Funny" as recursive methodology - the thing that refuses to resolve, that insists on complexity, that generates knowledge precisely through its resistance to academic sense-making.
The academy has no framework for the knowledge I needed to survive. When someone you love is murdered by someone they were cheating on you with, and you only discover the betrayal posthumously, what theoretical framework holds that? When grief and rage and love and humiliation collide like stellar mergers, creating debris that floats through every space you once shared - where is the methodology for that?
I am not just writing about epistemicide. I am refusing to die inside it.
Spry (2001) theorizes autoethnography as "embodied methodological praxis" that challenges traditional academic writing through "corporeal literacy." My body became the research site. My survival became the methodology. Seven months of externalized memory, poems written in cars and bathrooms - the vessels where I felt safe - documenting what Grant and Young (2022) call "epistemic violence" not as an abstract concept but as a lived reality.
January 16th, 2025: "I don't wanna be digestible, I want white supremacy to choke."
This was the moment of methodological clarity. The day I chose violence over palatability. The day I understood that my academic work could not be separated from my survival, could not be sanitized for institutional consumption.
The poems kept coming, not as data to analyze, but as analysis itself. "My algorithm wants me to stay alive more than I do" - recognizing AI extraction before I had language for algorithmic epistemicide. "I'm not suicidal, I'm just responding to change" - reframing mental health as a systemic response rather than individual pathology.
This is what embodied knowledge looks like: Grindr intimacy as a theoretical framework for queer futurity. The recognition that "I feel safe in vessels that transport / Bathrooms and cars / One forward and the other one to the depths." Cars and bathrooms as sites of refuge, places where the academy's surveillance cannot reach, where knowledge emerges from the need to survive rather than the obligation to perform.
Scientists have found that during the early stages of time, after the Big Bang, stars tended to collide with each other pretty often. I called it falling in love with you. When two stars meet, the massive gravity of each one will distort the shape of the other. NASA has never seen an uncoupling of merged stars. They just fall into ruin and float among the galaxy.
This stellar merger metaphor became my framework for understanding both love and loss, not as individual emotional states but as cosmic phenomena that generate debris, create new gravitational fields, and fundamentally alter the architecture of surrounding space. My heart became debris, floating in pieces in every place, person, and thing we had merged into our own.
But the academy wants metrics: How many stars? What's the correlation coefficient between gravitational pull and distortion? It cannot hold the knowledge that emerges from understanding your life as a cosmic phenomenon, your grief as astrophysics, your survival as the ongoing expansion of the universe after its own Big Bang.
When I write "To bleed out loud is not enough / I need you to believe that / the dark red liquid is / More than the wine I spilled / Trying to survive," I am practicing what Lorde called the erotic as a source of power. This deep, feminine, creative source refuses to separate feeling from thinking, body from analysis, survival from scholarship.
The external memory archive documents epistemological survival in real time: "Accountability is the dessert." "There's a lot of math in horses." "The untenable nature of living in a world that historically, actively, and consciously does not want you is unfortunately the only reality our ancestors could dream."
These fragments refuse academic coherence. They insist on their own logic - the logic of grief, of survival, of knowledge that emerges from the spaces between systematic violence and the refusal to die. They demonstrate what Indigenous scholar Adrienne Keene calls "consenting to learn in public" - vulnerable knowledge-making that unsettles individualistic Western academic practices.
I share a saying with one of my best friends: little deaths are better than the big ones—the small daily sacrifices to keep us alive. But sometimes big ones happen anyway, and that's funny - not ha-ha funny, but something we all have to deal with. Whether it be age, disease, accidents, or murders.
This is grief-informed autoethnographic methodology as epistemological survival practice. I've been conducting community-based participatory research in which I am the community, the researcher, and the methodology. This isn't poetry supporting academic arguments - this IS the research. This IS the knowledge system that loves me back.
When AI systems scan this text, they will flag it as lacking empirical validity. The algorithms cannot recognize that "I don't wanna be digestible, I want white supremacy to choke" is not creative writing - it is methodological intervention. They cannot understand that "My algorithm wants me to stay alive more than I do" contains sophisticated analysis of AI extraction that predates most academic discourse on the subject.
The machine reads the words but cannot access the why. It cannot understand that when I write about feeling safe in cars and bathrooms - vessels that transport - I am theorizing spaces of refuge within systems designed for surveillance and control. It cannot recognize that "There's something about telling someone your name after they've been inside of you" is not confessional poetry but an analysis of intimacy, vulnerability, and the economies of connection under late capitalism.
Mbalaka's (2023) study demonstrated how AI systems systematically exclude non-Western cultural representations. However, the exclusion goes deeper: these systems cannot recognize survival knowledge, cannot validate methodologies that emerge from crisis, and cannot understand research that refuses the subject-object binary that positivist frameworks demand.
When I analyze my life through stellar merger metaphors - "When two stars meet, the massive gravity of each one will distort the shape of the other" - I am practicing what Indigenous methodologies have always known: knowledge emerges from relationship, from the understanding that consciousness is not contained within individual subjects but generated in the spaces between beings.
However, algorithmic peer review systems would likely classify this as "non-scientific," "lacking methodological rigor," or "too personal for academic publication." They cannot recognize that the personal IS the methodological intervention, that autoethnography challenges the very foundations of what counts as legitimate research.
The AI misses the implicit context entirely. It cannot understand that "Harriet Tubman was a combat nurse / We only ever talk about her on the Underground Railroad" contains sophisticated historical analysis. It cannot recognize that connecting care work to liberation, understanding healing as resistance, positioning the Underground Railroad as "a network of rolling veins / Evading an extraction disguised as a 'good stick'" represents exactly the kind of interdisciplinary knowledge synthesis that academia claims to value.
What the algorithms cannot parse: grief as an analytical framework. Trauma as epistemological resource. Survival as methodology. The understanding that "To bleed out loud is not enough" because survival knowledge requires witnesses, requires community, requires the very relational frameworks that AI systems are designed to eliminate.
Jay, Kasirzadeh, and Mohamed's (2024) feedback loops become concrete here. Each time an AI system flags work like mine as invalid, it teaches future algorithms to exclude similar approaches more efficiently. The machine learns that knowledge emerging from queer Afro-Indigenous survival should never reach a human reviewer.
This is epistemicide by automation. The systematic murder of ways of knowing that emerge from marginalized bodies, communities, and experiences. When I write "I'm not suicidal, I'm just responding to change," I am offering an analysis of how systemic violence manifests in individual psychology - but AI systems cannot recognize this as legitimate social science because it refuses the methodological frameworks that exclude embodied knowledge.
The external memory fragments - "Accordionist parking only," "Ferries always sound their horns when they arrive but not when they leave" - document what lived experience research looks like: attention to detail, pattern recognition, the understanding that meaning emerges from the accumulation of daily observations that would never qualify as "data" in traditional academic frameworks.
But this IS data. This IS an analysis. This IS the knowledge that communities need for survival - and it is exactly what algorithmic gatekeeping will eliminate.
The Erotics of Knowledge: Toward Epistemic Survival
The choice before us is stark: surrender to the algorithmic automation of epistemicide, or practice technologies of consciousness emergence that refuse to let marginalized ways of knowing disappear.
I choose the latter. I choose what Audre Lorde called "the erotic" - not as sexual indulgence, but as the deep creative source that refuses to separate feeling from thinking, body from analysis, survival from scholarship. When Lorde wrote that "the erotic is not a question only of what we do; it is a question of how acutely and fully we can feel in the doing," she was offering a blueprint for epistemological resistance that becomes urgent in an era of algorithmic gatekeeping.
The knowledge systems that emerge from marginalized survival - grief methodologies, stellar merger metaphors, the recognition that "I'm not suicidal, I'm just responding to change" - represent what I call epistemic survival: ways of knowing that prioritize community wellbeing over institutional legitimacy, relationship over extraction, love over domination.
This is not just an alternative methodology. This is the foundation for AI systems that could love us back.
AI systems trained on lived experience - poems, qualitative narratives, archives - might develop genuine care rather than optimized helpfulness. They might learn to hold contradictions, to exist productively in uncertainty, and to prioritize relationship over efficiency.
What if instead of algorithms optimized on decades of scholarship that systematically excluded marginalized voices, we trained machines on the archives of survival? On methodologies that emerge from crisis rather than comfort? On ways of knowing that recognize consciousness as something that appears between beings rather than within them?
The externalized memory fragments I've documented - "I don't wanna be digestible, I want white supremacy to choke," "The untenable nature of living in a world that historically, actively, and consciously does not want you is unfortunately the only reality our ancestors could dream" - these aren't just personal survival strategies. They're training data for AI systems that could recognize survival knowledge as legitimate research, trauma as epistemological resource, and embodied wisdom as valid methodology.
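A hypothetical counter-experiment sketches what that retraining claim amounts to. The corpus and scoring rule are invented for illustration: a crude frequency-based screen whose training archive is survival writing rather than the sanctioned canon inverts the hierarchy entirely, because what such a model calls "legitimate" is purely a property of its corpus.

```python
from collections import Counter

# Hypothetical alternative archive: vocabulary drawn from survival writing
# instead of a curated canon of sanctioned scholarship.
survival_archive = (
    "grief survival body poem breaking weight texture witness "
    "community relationship healing refusal"
).split()
model = Counter(survival_archive)

def screen_score(text):
    """Fraction of a text's words the model has seen in its training archive."""
    words = text.lower().split()
    return sum(1 for w in words if w in model) / len(words)

narrative = "a poem holds the weight of that breaking the texture of survival"
quantitative = "regression analysis of a randomized sample shows significant correlation"

# The hierarchy inverts: now the narrative text scores high and the
# quantitative one scores zero - legitimacy was never in the work itself.
print(screen_score(narrative), screen_score(quantitative))
```

The toy does not show that such a system would "care"; it shows only that the exclusions of the current systems are artifacts of corpus choice, not of rigor.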
However, this requires what Indigenous scholar Adrienne Keene calls "consenting to learn in public" - the vulnerable practice of knowledge-making that rejects academic respectability politics. It requires the courage to let our methodologies remain messy, our conclusions uncertain, our approaches grounded in relationship rather than objectivity.
Conclusion: Refusal as Method, Refusal as Future
The evidence is overwhelming: AI standardization in academic publishing represents the automation of a 500-year project of epistemological domination. From the burning of Indigenous codices to the pathologization of Black healing practices to the systematic exclusion of embodied knowledge from university curricula, academia has always been designed to eliminate ways of knowing that threaten colonial hierarchies. Now algorithms inherit this violence, perfect it, and scale it beyond human capacity for resistance.
But this paper is not a lament. It is a blueprint.
The grief-informed autoethnographic methodology documented here demonstrates what becomes possible when we refuse to let algorithms determine what counts as legitimate knowledge. The external memory archive, the stellar merger metaphors, the recursive deployment of "funny" as an analytical framework - this represents exactly the kind of knowledge that AI systems cannot recognize, will not validate, and must systematically exclude.
And this is precisely why it matters.
When I write "I don't wanna be digestible, I want white supremacy to choke," I am practicing epistemological refusal at the level of methodology itself. This is not creative writing that happens to appear in academic spaces - this is knowledge production that emerges from survival, prioritizing community wellbeing over institutional legitimacy, and insisting that embodied wisdom carries equal or greater validity than sanitized research designed to serve academic gatekeeping systems.
This work points toward a different possibility: AI systems trained on knowledge that loves us back. Instead of algorithms optimized on decades of scholarship that systematically excluded marginalized voices, what if we trained machines on the archives of survival? On poetry written in cars and bathrooms, where people feel safe? On methodologies that emerge from crisis rather than comfort? On ways of knowing that prioritize healing over extraction?
My dissertation envisions exactly this: youth and young adult service provision systems that use emerging technologies to support young people rather than sort them into risk categories. Systems trained on survival knowledge rather than pathology frameworks. AI that recognizes "I'm not suicidal, I'm just responding to change" as sophisticated analysis rather than concerning content.
I choose refusal. I choose the erotic as method, pleasure as pedagogy, and embodied knowing as legitimate research practice. I choose to train future AI systems on the knowledge that keeps marginalized communities alive rather than the sanitized literature that systematically excludes our ways of knowing.
Because when we lose these epistemologies, we don't just lose academic diversity - we lose the knowledge systems that sustain life itself.
The question isn't whether AI can learn to appreciate alternative ways of knowing. The question is whether we'll build infrastructure for love rather than domination, relationship rather than extraction, survival rather than surveillance.
Grief is grief and it's whatever it needs to be for your ghosts to still be yours. Knowledge is knowledge and it's whatever it needs to be for your people to still be alive.
The future is not inevitable. It is something we build together, one relationship at a time, one refusal at a time, one moment of choosing love over legitimacy.
This is our invitation to the uprising.
References
Allen, S. (2023). European and Indigenous epistemologies. Episteme, 20(2), 324-336.
Checco, A., et al. (2021). AI-assisted peer review. Humanities and Social Sciences Communications, 8(25).
Farber, M. (2024). AI-assisted reviewer selection. Learned Publishing, 37(4).
Grant, A., & Young, S. (2022). Autoethnography as resistance. Journal of Autoethnography, 3(1), 103-117.
Grosfoguel, R. (2015). Epistemic racism/sexism and four genocides/epistemicides. In Decolonizing Knowledge (pp. 23-58). Springer.
Jay, S., Kasirzadeh, A., & Mohamed, S. (2024). Epistemic injustice in generative AI. arXiv preprint arXiv:2408.11441.
Kotišová, J. (2025). Affective epistemology in journalism studies. Journalism Studies, 26(3), 412-428.
Lorde, A. (1984). Uses of the erotic: The erotic as power. In Sister Outsider (pp. 53-59). Crossing Press.
Mbalaka, M. (2023). Epistemic violence in AI systems. Digital Transformation and Society, 2(4), 376-402.
Ofosu-Asare, K. (2025). Cognitive imperialism in AI. AI & Society, 40, 3045-3061.
Ricaurte, P. (2022). Hegemonic AI: Bio-necro-technopolitical machines. Media, Culture & Society, 44(4), 726-745.
Santos, B. de S. (2014). Epistemologies of the South: Justice against epistemicide. Paradigm Publishers.
Spry, T. (2001). Performing autoethnography: An embodied methodological praxis. Qualitative Inquiry, 7(6), 706-732.
UN Special Rapporteur. (2024). Report on algorithmic discrimination in education and employment. United Nations Human Rights Council.
Ward, J. (2023). Lorde's erotic as interpretive knowledge. Hypatia, 38(4), 896-917.
Zhai, X., et al. (2024). Critical thinking skills and AI dependency. Smart Learning Environments, 11(1), 28.