Lying on deep surfaces.


Context Protocol began in 2022 as an investigation into the early stages of generative image-making. Completed in 2025, the project takes the form of a trilogy of triptychs: three moving-image-and-sound prototypes, each composed of three internal parts. Each part addresses contemporary questions through a veiled, eerie, and unearthly register, producing a hermetic atmosphere driven by a slow, hypnotic tempo.

On one hand, the trilogy functions as an archaeological record, a deliberate documentation of the early days of generative image practice, grounded in the material traces of the models and workflows that shaped that moment (Stable Diffusion 1.5, Stable Diffusion XL, and AnimateDiff). On the other, it interleaves this record with layered subjective constructions: text, moving-image, and audio assemblages that treat the machine as both a tool and an interlocutor.

The project is guided by a practical, non-commercial stance: generation takes place offline and locally rather than via hosted platforms. Early experiments and model choices are part of the work’s record; community-trained variants and weights were explored and examined for their particular behaviors and limits (for example: WrongTurnXL_WTSDXL7.7.safetensors, LightRaysEnd_contrast.3.7.safetensors, RealisticReal_V5.1VAE.safetensors). Textual revision and collaborative editing were carried out with distilled uncensored models (DeepSeek R1 Distill Qwen 32B abliterated), used as a critical aid rather than an endpoint.


Lying on deep surfaces is not lying

The ambition and risk of Context Protocol lie in the convergence of two forces: intuitive, affective expression and careful, critical analysis. The project treats deep learning models not merely as generative engines but as cultural objects whose architectures and datasets encode histories, absences, and biases. This dual perspective makes the work as much a poetics of materiality as an ethical inquiry.

Methodologically, Context Protocol adopts what might be called an intuitive reverse engineering: a regressive-forward trajectory that refuses the myth of linear technological progress. Beyond the celebration of novelty, the project probes the inertias—formal, cultural, and archival—that neural networks reproduce, and seeks to disturb them through deliberate error, layered reinterpretation, and constrained practice.

A central conceptual strand is the idea of a human deep learning: a metaphorical appropriation of the structural affordances of neural networks (multi-layered attention to pattern and correlation) redirected toward human practices of memory, semantic ambiguity, and subjective meaning-making. Where machine learning excels at correlational prediction, a human deep learning privileges what datasets omit: cultural weight, memory fractures, and the contingency of experience, while borrowing the disciplined layering that makes machines effective.

The work is structured in three distinct moving-image modules: First Triptych, Fifth Triptych, and Seventh Triptych. The trilogy functions as a prototype of this hybrid poetics: each module visualizes formal and symbolic outcomes while excavating the hidden logics of its source models—the tension between randomness and control, the syntax of dataset memory, and the slippery boundary of authorship in a mediated process.