Interview with Matthew Biederman: Parsing the Threshold of Perception

Matthew Biederman is a Montreal-based artist whose work explores light, colour, and the finer details of the electromagnetic spectrum, along with the related thresholds of perception. Drawing on momentum from a series of projects that grew out of a recent residency at a nanotechnology lab in Portugal, Biederman has just released Micro-Macro on Sedition. Below, he delves into the recurring interests that run through his works and situates the last several years of his practice.

Biederman’s works have been exhibited extensively in the US, South America, Europe, and Japan, in festivals and venues including 7 ATA Festival Internacional (Lima), the 2014 Montreal Biennale (Musée d’art contemporain de Montréal), the Biennale of Digital Art (BIAN, Montreal), Artissima (Turin), the Moscow Biennale, Art and Alternative Visions (Tokyo), and Sonic Acts (Amsterdam), among others.

“All The Way Down” at gnration—Braga, Portugal (2018)

Micro-Macro is based on Morphogerador (2018), an ‘infinite duration’ generative video that emulates the structural colour seen across various species in nature through a slow zoom into nested, fractal-like patterns. Could you describe the residency at the Iberian Nanotechnology Laboratory (INL) that this project grew out of, and why you chose to work with reaction-diffusion algorithms to generate this form?

Matthew Biederman: I received an invitation to participate in the residency from Luis Fernandes, programming director of gnration and curator of the Semibreve festival (and a fantastic musician). As I understand it, Luis was approached by the folks at the INL to help coordinate an artistic residency program within the lab, and to date they have had a number of notable artists there, including Pierce Warnecke, Ryoichi Kurokawa, Antye Greie (aka AGF), and others. It was an incredibly open and flexible situation that began with a couple of initial online meetings, after which I selected the research group whose work I was most intrigued by. I ended up working with the Nanophotonics group for about ten days on site in their labs, where I got a chance to see how they work, to try and get a grasp of what they were doing there, and to gain a bit of understanding of what the INL is and what it aims to be. It was a fantastic experience—the researchers and the staff were totally open to having an artist around and asking questions. I was given complete access to the building and the researchers there and never felt as if I was intruding. I went to the roof, and explored the building from top to bottom. One of my favourite spaces was the machine room, where I spent a few hours making the audio recording that accompanies the work here on Sedition.

One of the things that I learned there, and that the Nanophotonics group works quite closely on, is the natural phenomenon of ‘structural colour’. This is a way that nature produces colour without using pigment: tiny structures refract different wavelengths of light in different directions, which in turn creates what we perceive as iridescence. So when a surface with this structural colour shifts, or your orientation to it shifts, the wavelengths of light you see shift as well. For someone who has always been interested in perception and the body, this was a revelation. You find it most commonly in Morpho butterfly wings or peacock feathers, but it occurs in many other instances as well, and some of the researchers there are trying to find where and why it happens in order to potentially harness these properties in, for instance, solar cells, to make them more efficient. Coming from my background and interest in colour and perception, it struck me as a very interesting avenue to explore, and really each one of the works I made during the residency explores this concept in one way or another. Taking Morphogerador as an example, it really began as an experiment to try and recreate—as much as possible—an image that has similar visual properties. Knowing full well, of course, that it would be impossible—but I wanted to give it a try and see where it took me. The use of a reaction-diffusion algorithm in this instance made perfect conceptual sense to me, since Alan Turing hypothesized that these algorithms can simulate virtually any natural patterning system given specific values, weights, and time.

So for me there was a very interesting synergy in using an algorithm that simulates natural phenomena to try and represent a natural phenomenon. As I worked on the piece, it also began to take on these larger issues of science’s continual desire to look ever closer at our world, or ever further into space—and so far, we haven’t yet reached the limit of how far we can ‘see’. At the INL there is a room-sized microscope that images at the atomic scale. It’s amazing: it lives in a Faraday cage several floors below the ground, built on solid rock, which was a requirement in selecting the location of the lab.
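As context for the technique Biederman mentions, the short sketch below is a minimal Gray-Scott reaction-diffusion loop written in Python with NumPy. It is not Biederman's own code, and the grid size, feed rate (F) and kill rate (k) are illustrative assumptions, but varying those two parameters produces the spots, stripes and labyrinths that Turing's model predicts.

```python
# Minimal Gray-Scott reaction-diffusion sketch (illustrative parameters only).
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=256, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # Seed a small square of the 'activator' species in the centre
    r = 10
    U[n//2-r:n//2+r, n//2-r:n//2+r] = 0.50
    V[n//2-r:n//2+r, n//2-r:n//2+r] = 0.25
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return V  # map this field to colour to visualise the pattern

if __name__ == "__main__":
    pattern = gray_scott()
    print(pattern.min(), pattern.max())
```

Mapping the resulting field through a colour gradient and slowly zooming while re-seeding gives a rough sense, under these assumptions, of how a generative piece built on such a system might behave.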

Interference, LED tube sculpture (2018)

Interference (2018), Perspection (2015), and Event Horizon (2012)—when I encounter your installations I always think of Op Art. I have to ask: to what degree do you feel an affinity, or lack thereof, with artists like Bridget Riley or Victor Vasarely?

MB: Hmm. I think I like their work more than the term Op Art—which is, if I recall correctly, a gift from Donald Judd. I guess I find a connection to many of the artists who had their work shown in “The Responsive Eye” exhibition at MoMA in 1965. You had Julio Le Parc, for instance, whose work directly addresses one’s position in terms of how it is perceived, and there were others, such as Robert Irwin, Josef Albers, and Ad Reinhardt, artists whose work reflected the idea that artmaking can be, or rather is, a form of research, in their cases using colour and light itself as the medium; my work is very closely related to this. At the end of the day, any electronically mediated display is emitting light, and I have always been sensitive to the idea that all this software, these computers, controllers, etc. are doing is controlling what colours are emitted from those displays over time; the computer is a way to have very precise control of this grid of lights, either projected or from a screen. I find that considering things this way opens me up to all sorts of freedoms, since I don’t have to think about the screen as an image delivery device. In this way it would be impossible not to be related. Op Art as a title gets a bad rap, partially because of this exhibition and because of the name attached to it; hence artists like Irwin, for example, trying to come up with their own names for what it is they are doing. He uses “conditional”, for instance. “The Responsive Eye” was so wildly popular that the images and the ideas were almost immediately co-opted by advertising and popular culture, further diluting some of the revolutionary ideas inherent in the works.

Perspection, multichannel interactive audiovisual installation (2015)

Since we’re drawing connections between Op Art and the current moment, do you think we might see the bleeding edge of digital aesthetics co-opted by modern advertising? I don’t mean affable CG lizards selling car insurance in television spots, but the colour palettes, compositions, and formal language of your work and that of your peers (Carsten Nicolai and United Visual Artists spring to mind) being absorbed by the commercial realm?

MB: By now, visual culture moves so fast, and it seems that the looks are copied and repeated so quickly and easily that I’m not sure it’s healthy to think about too much. The best artworks are a distillation of an idea into a realization where all the parts conceptually fit together to form a complete whole. When a look or style is aped to shill something it's obvious, and is seen and felt very differently; or at least I hope it’s seen and felt differently. Besides, by the time it is scooped up and re-presented by someone else, we’re already on to another work ourselves.

A Generative Adversarial Network, video & lightbox installation (2018)

Of your recent projects, I was quite struck by A Generative Adversarial Network. This could easily be read as a purely aesthetic experiment exploring figurative representation ‘after the algorithm,’ but given that the dataset was developed by the TSA, there is a surveillance and civil liberties subtext. Beyond serving as an aesthetic baseline, how did this training data shape the development and framing of the project?

MB: This work came about completely organically. I had been teaching myself some machine learning techniques and doing some research on where and how these algorithms are being used, or are planned to be used—it seems everywhere you turn, AI is at least being proposed and discussed for some task. I happened to read an article in the New York Times that mentioned Kaggle, a website that acts as a clearing house for contests asking people to try and solve problems using machine learning. So I went and had a look and found that the TSA had a contest with a bounty attached to it, and I thought immediately that there was something interesting there: it pointed to a task I’m not so sure machines should be left to do. I wasn’t interested in solving the problem, but in the fact that it was even there in the first place, on a site whose parent company is Google, which maintains the TensorFlow libraries and so on. So I signed up and got hold of the training data they created for the contest. Some of the machine learning algorithms I had been working with at that time were specifically Generative Adversarial Networks (GANs), and I always thought it was an interesting name for a fascinating process. I also end up travelling a lot, so I witness first hand the way that citizens are treated as we pass through these so-called secure zones, and our rights within them are often on my mind.

Much of the artistic work around AI these days uses the images that GANs produce on a purely aesthetic level, understandably—they produce really beautiful, striking imagery. Working within this field right now, however, I am interested in the use and manipulation of the system—what machine learning is and how it can be utilized within specific contexts as a means of additional control and surveillance. I was looking for a way to combine the beauty of watching a machine try to recreate an image without copying it, as GANs do, with a comment on how these systems operate and could potentially manipulate through imagery; we truly could come to the point of being unable to trust images.

Many citizens of the world are confronted by technological systems and treated as adversaries in many contexts—this approach is propagated by those who desire power or endeavour to remain in control. I’m thinking here of the immigrant situation worldwide, for instance, or the fact that in the US kids and adults are being killed by police officers while the federal government brazenly manipulates video footage to reinforce its own biases. So I am trying to ask the questions, or look a little way into the future, to consider how these GANs, these ‘adversarial networks’, might be used to cast any of us as an adversary at any point, and how these algorithms could be weaponized. This training data was composed for the contest, but I don’t believe that everyone they scan is anonymized—it may be today, but that might not be the case tomorrow. Exposing the scans themselves is a revelation of just how much is seen, period—the body looks as if it is on display for inspection on a lazy susan. The whole point of the contest is to remove the human reading these scans and hand the task to an algorithm that will push a person on to further processing. And if you consider that the network, or AI, might eventually be the judge and jury, well, it could be a very scary time.

It’s funny, too, since what people seem to be most interested in is the real millimeter-wave scans and what can be seen in them—the ads in the back of comic books for X-ray specs were true. Personally, I always opt out and go for the frisk; after all, that’s radiation—which brings us back to the electromagnetic spectrum.
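As a reference point for the adversarial process discussed above, the sketch below is a minimal GAN training loop in Python with PyTorch. It is not the setup behind A Generative Adversarial Network; the network sizes, the random stand-in data, and the hyperparameters are assumptions made for illustration. It simply shows the two-player dynamic: a discriminator learns to tell real samples from generated ones while a generator learns to fool it.

```python
# Minimal GAN training loop sketch (PyTorch). Illustrative only: networks,
# data, and hyperparameters are assumptions, not the setup used for the work.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # hypothetical sizes

# Generator maps random noise to a sample; discriminator scores realness.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=32):
    # Stand-in for real training data (e.g. flattened scan images)
    return torch.randn(n, data_dim).clamp(-1, 1)

for step in range(1000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))

    # Discriminator step: distinguish real samples from generated ones
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator labels as real
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(real.size(0), 1))
    loss_g.backward()
    opt_g.step()
```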

Plenty of Room (for Feynman), nano-sculpture (2018)

Another project that emerged from your time at the INL was Plenty of Room (for Feynman), a nano-scale sculpture inspired by theoretical physicist Richard Feynman’s anticipation of atomic-scale construction. For obvious reasons, this work is presented as a photographic print when exhibited, but I’d love to hear about the challenges of constructing this array of tiny hands.

MB: It’s not just the photo that is presented. The actual hands are presented as well; it’s just that you can’t see them, but they are there on a glass slide. Basically they are a hyper-precise version of the 3D printing we are all familiar with now. In this case, however, the 3D printing is performed by a femtosecond pulsed laser (10⁻¹⁵ seconds, or one quadrillionth of a second) hitting a drop of liquid photoresist polymer that reacts to the laser. The process is called two-photon polymerization, which is exactly what it sounds like: when two photons hit the polymer at the same time and in the same place, the polymer reacts and crystallizes in that very small area, in effect creating a physical voxel. In this process, since the laser has to be extremely well focused, it is the platform that moves incrementally as the pulsed laser hits the droplet. The rest is very similar to traditional 3D printing: you feed an .STL file into the hardware, the platform knows where to move and when to pulse the laser, and when it’s finished you simply rinse away the remaining photoresist. It took us quite a few tries to get it right; everything needs to be precisely calibrated, and you have similar issues to deal with, such as internal supports for the structure, since you are building out of a liquid. I think this final version took approximately eight hours to complete. The structure is kind of glass-like, so it’s really only visible under an optical microscope, which has its limits, and you don’t see much detail unless you use a Scanning Electron Microscope (SEM) to examine the structures. Before getting the SEM image, however, we first had to coat the sculpture in a fine layer of gold so the electrons could actually bounce off the surface and create the image. Luckily the INL has the equipment and the expertise! They have a very advanced sputtering system there that can deposit all sorts of materials, as you might need for lithography and building silicon chips and so forth, so we were able to use that in order to do the imaging.

The hands actually weren’t the first object we tried to build, however; the first one was a set of structures that would interact with light. My idea was to create a synthetic structure, a photonic crystal that would imitate structural colour. We tried a few times, but at the time the setup wasn’t quite capable of building such small structures at the resolution needed; our attempts were only interacting at the gamma wavelength, and unfortunately we can’t see those waves. Since I was there, however, they have upgraded the laser, and I understand that it might now be capable of doing that. That’s how I ended up with Interference, this huge piece on the floor: its structure mimics that of a synthetic photonic crystal that has already been produced. That structure is then covered in dichroic film (which in a way is an imitation of structural colour), and I have programmed a set of LED strips to generate an animation through it, which is a simulation of wave interaction.

API Sonic Tent (2009-)

Beyond your algorithm wrangling and geometric explorations, the Arctic and the radio spectrum recur throughout your works. How have these enduring interests shaped your practice?

MB: Well, I have always seen my practice as fairly holistic, though it may come across as rather scattershot; in fact it’s the art world that I find rather siloed in its interests and exhibition strategies. I came of age as an artist, specifically a ‘media’ artist, in the ’90s, when there was a strong movement of media artists also working as activists, and video, radio, and eventually the internet were always understood as ways to empower communities. We believed that by having access to the technologies of mass media, and the means of distribution, even on a punk-rock DIY scale, there were important stories, poetry, art, or whatever you can imagine to share through these technologically mediated channels. This ethos is how I found myself working in the Arctic as a founder of the Arctic Perspective Initiative (API). It’s this desire to share access to a set of tools, or in this case to have the opportunity to imagine and then build a set of tools together within a community, to be used as a means of empowerment rather than a means of consumption or co-option. By building and creating tools and media, and all that goes with it, within a specific community, that community can reflect itself and grow rather than be consumed by a dominant culture through consuming its products and being fed its desires. Where we began working, in Igloolik, Nunavut, was not an accident, since this singular community voted to keep television out of their hamlet until there was a television station that actually told local stories and was made by the local people. They understood that if you don’t see people who look like you and act like you, you’re invisible, and eventually might be really invisible. Of course this system didn’t exist at the time, but some of the people who kept TV out went on to create the Inuit Broadcasting Corporation—whose earliest employees were Zacharias Kunuk and Pauloosie Qulitalik, who then went on to form Isuma Productions in Igloolik (and whose work will represent Canada at the next Venice Biennale). They understood from the very start what it means to have access to these tools of production and distribution. To have the power to broadcast, whether radio, television, or data for that matter, is power. They understood that, and it’s what has always excited me: these invisible zones of power and control.

From a conceptual standpoint, however, radio and television broadcasts are different wavelengths of the same continuum as visible light—just different points on the electromagnetic spectrum. In fact, I have always considered television and radio as prosthetic devices that re-tune wavelengths we can’t normally see into a zone that we can, or, in the case of radio, re-tune them into physical vibrations. Again, it’s related to the same phenomenon that all the works from the INL series try to tackle: scale. I guess that is why they call the residency Scale Travels!


