ƒ(x,y)


Smartphones, robots, and other digital devices play increasingly large roles in our daily lives. In coming decades, computer science is set to make significant advances, drawing on artificial intelligence and machine learning. As this happens, Alexa, Roomba, and Siri could begin to develop deeper cognition, even feelings of their own.

This project explores human and non-human visual perception through the medium of photography. At present, humans look at pictures and perceive shapes and colors, forming complex interpretations and emotional responses, while robots simply extract digital information, such as RGB values, pixel by pixel. Yet robots are beginning to recognize objects, and further convergence is inevitable. By juxtaposing traditional elements of perception (line, shape, form, color, value, and so on) with digital elements of perception (for example, matrices of RGB values), I explore alternative modes of visual processing.
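
To make the machine's side of this contrast concrete, here is a minimal sketch, assuming the Pillow imaging library and a placeholder file named photo.jpg: the photograph becomes a function ƒ(x, y) that returns a triple of RGB values at each coordinate, a matrix of numbers rather than a scene.

    # A sketch of "seeing" a photograph the way a machine does.
    # Assumes the Pillow library is installed and "photo.jpg" exists
    # (both are placeholders, not part of the project itself).
    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")
    width, height = img.size

    def f(x, y):
        """The image as a function: a coordinate in, an (R, G, B) triple out."""
        return img.getpixel((x, y))

    # Print a small matrix of RGB values from the top-left corner of the frame.
    for y in range(min(4, height)):
        row = [f(x, y) for x in range(min(4, width))]
        print(row)

Where a viewer might see a horizon or a face, this reading of the picture yields only rows of numbers, which is the gap the photographs in this project set out to examine.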

Mark