The original idea was to combine a "smart mirror" (a screen with a semi-transparent mirror on top) and an "infinity mirror" (a reflective tunnel created by separated mirrors). First, I built a proof-of-concept to make sure it was possible. To do so, I needed:
To make the piece interactive, I would later add:
For the glass, I used inexpensive samples, which limited the size of the build, but allowed me to use high-quality mirrors without the high price.
After everything arrived, I connected all the peripherals. The LED strip needs PWM for control, so I had to disable the audio module on the Pi (more info here). Afterwards, I used the rpi_ws281x library to control the LEDs with Python.
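Freeing up PWM typically means telling the OS not to load the onboard audio driver. A sketch of what that looks like (the exact file name is my choice here; check the rpi_ws281x docs for your OS version):

```shell
# Blacklist the Pi's onboard audio module so the PWM channel is free for the LEDs
echo "blacklist snd_bcm2835" | sudo tee /etc/modprobe.d/snd-blacklist.conf
sudo reboot
```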
Next, I began prototyping. I built a janky housing out of cardboard and tested different mirror combinations. After some trial and error, I concluded that using all three mirror pieces, from thinnest to thickest, provided the greatest perceived depth (i.e. it reflected the longest trail). Next, I tested different LED placements, and found that the LEDs need to be right next to the mirror to create the "trail" effect I wanted. If they were recessed into the edge, they simply flooded the area with color, which was not as impressive. I tested the prototype with an hour-long kaleidoscope video I found on YouTube, since it is similar to what I planned on coding for the display.
Next, I designed and built a case to house all of the pieces. I drew up a diagram of how I imagined the case would look, then acquired all the materials I would need to bring it to fruition.
I ended up using a 2' x 4' x 1/2" sheet of birch, which was thick enough to provide structure and hold screws, but not so thick that it would cover the glass. I also bought a mahogany-colored wood stain / polyurethane mix, which let me stain and coat the wood in one pass; this was helpful because I had to work quickly to finish everything over the weekend. My dad already had plenty of leftover angled steel, but it was very rusty, which added some extra work. He also had plenty of miscellaneous screws and brackets, so I didn't need to buy those either.
The first step of the outer case was the metal rim. I knew this part would be difficult, but I wanted to do it for a few reasons: it would protect the fragile inner pieces, it would look nice, and I wanted to learn to weld. I began by cutting four (nearly) identical 12" pieces out of a long strip of angled steel. I cut each corner at 45 degrees, then used an angle grinder to get them all to fit nicely. This ended up being a long process of assembling the pieces, testing the fit against the glass, then grinding, then reassembling, over and over until everything fit.
Then, it was time to start welding. I practiced a few welds on scrap pieces, then started connecting the corners. First, I clamped down the pieces I was welding so they wouldn't move mid-weld. Then, I connected them with the welder. Afterwards, I used the angle grinder to flatten the weld, which left me with a solid piece. However, the first few times, I would deposit a bunch of metal, and when I ground it down, the pieces would separate. This kept happening because I was moving the torch too fast, so the pieces weren't melting enough to really fuse. Once I figured this out, I produced much stronger welds. I also ran into problems on the last corner: my pieces weren't precise, so the last two edges didn't meet. To solve this, I had to bend them together by putting the frame in a vise and hitting it with a rubber mallet. Then, I (strenuously) held them in place while I clamped them. This worked, but it left the piece slightly wonky, as it was no longer a perfect square.
After the pieces were connected, I went over the whole thing with the angle grinder a few times to flatten the corners and remove the rust. After that, I went back over it with a sanding wheel to give it a final polish, and was left with a beautiful piece of metal. Originally, I planned on building a full rectangular frame out of metal, but the front piece took ~5-6 hours, and I didn't have time to build the rest.
Next, I started building the inner case that houses the two 8" mirrors. I used a router to create an indent on the inner edge of both sides. I wanted 1.5" between the inner mirrors, so I measured out a strip of wood that was (2 * mirror width) + 1.5" wide, and cut it from the main piece with a circular saw. Then, I used the router to cut the indents. Finally, I used a chop saw to cut two long pieces and two short pieces. Then, I drilled guide holes and put it together with screws.
Next, I built the outer case. I measured it based on the dimensions of the outer glass, cut the strips I needed, then assembled them (drilling guide holes and securing with screws). I was trying to finish by the end of the day, so I immediately started cutting and testing the offsets needed to hold the inner case. However, when I tried a test fit of the metal frame, I found that the outer box was ~1/2" too small on one side, which meant it wouldn't fit nicely once it was all done. Begrudgingly, I rebuilt the whole thing, measuring based on the frame this time. I was worried, since I was running out of time and wood, and I still needed a large piece to hold the screen. However, the second try ended up fitting really well, and I'm glad I didn't try to make the other one work.
The final piece was the screen holder. This piece was probably the easiest. I cut out a square that fit within the outer case and outlined the screen in the center. Then, I used a jigsaw to cut out a (nearly) perfect hole to fit the screen. Next, I placed it on top of the offsets of the inner case, and held it in place with 4 screws. At this point, it was Sunday night, and I needed to drive back to school. So I put everything together, cut a few extra offsets, and headed back.
When I got home, I assembled all the peripherals within the case. First, I cut a slot in the inner case with a dremel for the LED connection. Then, I fed the strip through and superglued it in place. Next, I cut the extra LEDs off, and soldered the control wires onto the strip. Then, I dremeled some holes into a spare piece of wood for a camera mount, such that the camera would sit flush. I secured it onto the wood with double-sided foam tape, and attached that to the top of the inner case with an L-bracket. Finally, I attached the IR lamp to the top of the outer case with two leftover screws.
Then, I coated the metal frame, since I live in a humid climate. I used a rust-preventing clear coat, and applied 6-8 light coats over the course of 3 hours. This sealed everything nicely and gave it a smooth texture.
Next, I used the leftover stain/poly to coat the exposed edges of the outer case, and also filled in some of the unused/misplaced screw holes. At some point I want to get screw covers, but I didn't have time this week. Then, I applied weather-seal foam tape around the edge of the metal frame to protect the mirror. This provided the extra benefit of making the frame sit more snugly against the case, which meant I didn't have to glue it in place.
The last step of the interior was covering it with blackout fabric to hide the internals. Originally, I wasn't planning to do this, but there were a couple visible pieces, and I wanted it to look seamless. I measured and cut a hole in the center that was the size of the inner glass, then I superglued it to the inner case. Next, I made sure all my wiring was correct and the inner mirrors were level, before using screws to attach the blackout fabric to the inside of the outer case. I used screws so that I can still access the interior if necessary, but I haven't had any problems with the hardware so far (luckily).
One of the most difficult parts of the build was leveling the outer mirror. If the three mirrors weren't level, the reflection trail would curve and become very disorganized. I tested the levelness of the mirrors by placing my phone underneath with the flash on, then making slight adjustments until the trail lined up. I'm sure there's a better way to do this, but I held the mirror at the correct angle by stacking and gluing pieces of cardboard / paper around the rim of the case. Once it was level, I glued the outer mirror to the case, in the hope that it won't shift in the future. After about a week, it's still almost perfectly level, which is reassuring.
At this point, everything was assembled, and I was ready to move back to the software side of things. However, I had three separate power cords, and I realized this would prevent me from using it with normal outlets. So, I found an extension cord with three inputs, and plugged everything into this. Then, I (roughly) routed the power cables and taped them down. Now, everything is flush with the back, and I only have to plug in one cord to get power to all of the components. This was a major step for me, since I want it to be as streamlined as possible.
Overall, I am very happy with how the case turned out. It is incredibly solid, looks nice, and was a great learning experience. It is also shockingly similar to the initial sketch of how I wanted it to turn out. I really enjoyed the whole build, and I will definitely be trying out more work like this in the future. The metal work was very gratifying, since the tools can manipulate it like butter, but you end up with a nearly indestructible object. It was a nice respite from endless coding, and I'm glad I was able to incorporate both types of work in this project.
Initially, I set up a Processing program to display the camera input on screen using the GLVideo library, which takes advantage of OpenGL hardware acceleration to display video. I was surprised by how fast it was able to process and display video on such a small machine, since I didn't think the Pi would be powerful enough to handle camera input.
After some testing, I discovered that the pure-IR output was very creepy: it didn't really pick up hair, and it left dark circles in the eyes. I needed to process the input into something a little more visually appealing. So, I decided to go with an edge detector, since it would display only the most crucial visual information. With edge detection, the majority of the image is black, which makes the reflection trail much more visible. However, the built-in edge detector in Processing was very resource-heavy, and I was getting very slow frame rates, which destroyed the overall immersiveness.
Eventually, I realized that using a GLSL shader would be much faster, since it can use the same hardware acceleration as the video library. I searched around and found a well-written edge-detection shader for Processing. When I filtered the feed with this shader, I got the minimal output I was looking for, and it didn't introduce any noticeable lag.
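The shader itself is GLSL, but the underlying idea is plain convolution-based edge detection. As an illustration only (this is a generic Sobel operator in Python, not the shader from the project):

```python
# Minimal Sobel edge detector on a grayscale image (list of rows),
# illustrating what an edge-detection shader does per-pixel on the GPU.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5  # gradient magnitude
    return out

# A vertical step edge: flat regions stay dark, the boundary lights up.
img = [[0, 0, 255, 255]] * 4
edges = sobel(img)
```

The GPU version does the same arithmetic, but for every pixel in parallel, which is why the shader introduced no noticeable lag where the CPU filter did.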
The next problem I ran into was visibility. The camera (with its built-in IR lights) was accurate and detailed when the subject was ~6 inches from the mirror, but any farther than that and it would disappear. Furthermore, I wanted to put the camera behind the outer mirror, so that it would be fairly hidden to the viewer. In testing, the IR lights had difficulty penetrating the mirror, and the output was very minimal. The first solution I tried was to remove the IR lights, solder some "extension wires", and put the lights outside the case and the camera within. This kinda worked, but the subject still had to be very close to the camera to show anything. So, I started looking for a stronger IR lamp/spotlight to illuminate the subject, which would give the camera a lot more data to work with. It turns out that these are commonly used in conjunction with security cameras, and I was able to find a fairly cheap 8-LED lamp on Amazon. Once this arrived, I removed the built-in IR lights and tried running my edge-detection sketch once again. The result was a massive increase in detail, whether the (normal) lights were on or off.
Once the hardware was fully assembled, I dove back into the software. The frame rate of the camera was jarring, so I started playing with parameters. Eventually, I found that I could reduce the resolution of the camera input, and the edge detector would smooth it out so that it wasn't pixelated at all. This sped up the camera noticeably, and it feels much more natural now.
Next, I added some keyboard controls to pause the camera, and enable/disable the shaders. I also added a few more shaders, but stacking them introduces major lag, and none of them have been particularly interesting on their own. I still have some work to do to polish the camera feed, which I may be able to wrap up tomorrow morning.
One of the mistakes I made was prematurely gluing the mirrors on, since I was then forced to either disconnect all the components to code, or code / test through the reflection trail. It turns out it is surprisingly hard to read small text through a reflected display...
While testing, I used the LED strip example code that was included in the library. It was a rotating rainbow, which was helpful to test functionality, but fairly ugly in practice. So, I decided to write a few of my own methods to create a more aesthetically pleasing output.
First, I sketched out the layout of the LEDs in the case, and found that I had 77 "pixels" in total. Three sides had 19, one had 18, and there was one LED outside the case that wasn't visible. I planned out which array index ranges contained each side, then found the index of the center of each side. Next, I wrote a method that would push colors from the corners to the center of each side. I played around with the parameters, and settled on a range of color that was less "rainbow-y" than before. It still cycles through all the colors, but it has a fairly unified color scheme at any given moment.
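The index bookkeeping can be sketched like this. The side counts come from the build; the color math below is a simplified stand-in of my own, not the actual method:

```python
# Sketch of the "push colors from the corners to the center" idea.
# Side pixel counts follow the build (three sides of 19, one of 18).
def side_ranges(counts=(19, 19, 19, 18)):
    """Return (start, end) index ranges for each side of the strip."""
    ranges, start = [], 0
    for n in counts:
        ranges.append((start, start + n))
        start += n
    return ranges

def push_to_center(n, t, palette_size=256):
    """Color indices for one side at time step t: pixels at equal
    distances from either corner get equal colors, so hues appear
    to flow inward from the corners as t advances."""
    return [(t + min(i, n - 1 - i)) % palette_size for i in range(n)]

ranges = side_ranges()
colors = push_to_center(19, t=0)
```

Mapping the palette index to an actual RGB value (and narrowing the hue range to keep it less "rainbow-y") would happen in a separate step.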
Writing this method was especially time-consuming because I hadn't coded in Python since high school, and it took a while to remember how lax it is. I kept wanting to declare all of my types and such, like I'm accustomed to, which caused various syntax problems.
Eventually, I decided that I wanted to simulate a sort of "world" within the LEDs. So I turned to Conway's Game of Life. I attempted a simple binary one-dimensional version, but this ended up being fairly uneventful and spastic.
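A minimal version of that first attempt might look like the following. The specific birth/survival rule here is an assumption for illustration, not the rule I actually used:

```python
# One-dimensional binary cellular automaton: each cell's next state
# depends on itself and its two neighbors, wrapping around to match
# the ring of LEDs. The rule here (survive with exactly 1 living
# neighbor, be born with exactly 2) is just an example.
def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]
        alive, neighbors = cells[i], cells[(i - 1) % n] + cells[(i + 1) % n]
        nxt.append(1 if (alive and neighbors == 1)
                   or (not alive and neighbors == 2) else 0)
    return nxt

state = [0, 0, 1, 1, 0, 0, 0, 1]
state = step(state)
```

With only on/off states and a tiny neighborhood, patterns tend to either die out or flicker, which matches the "uneventful and spastic" result.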
So, I decided to take it a step further, and implement color. I liked this idea because instead of a simple alive/dead balance, there would be different groups of color moving around and fighting to survive. It felt like a globalized version, as if each color is a different culture or nation, and it would more accurately represent the chaotic world we live in. I also decided to implement probability in the actions taken by each pixel, to add another level of chaos and variety.
However, this ended up being much more complex, since I had to compare colors and build a much larger set of rules. I used the same 5-pixel neighborhood logic, and laid out six categories for the groups:
Nation - all 5 pixels are the same color
- 95%: the center stays the same color, varying slightly in brightness
- 5%: the center "mutates" and takes a random jump in one direction, limited by the approximate size of the color (e.g. blue could only jump to blue-green)
These are the brightest groups, and like a nation, they are very unified. However, there is still the possibility of "dissent" and mutation.

City-State - the two closest neighbors are the same color, but the farther neighbors are not
- 90%: stays the same color, varying slightly in brightness
- 10%: mutates
Slightly dimmer than nations, with a slightly higher chance of mutation due to weak control over the surrounding pixels.

Inland - 4 same-color pixels in a row, with a different color on one side
- 75%: stays the same color
- 15%: becomes the average hue of the neighborhood
- 10%: mutates
Dimmer than nations, these are safely inside the "borders" of a color group, but there is still a chance of influence by the differing neighbor (averaging) or outright mutation.

Edge - 2-3 same-color pixels in a row on one side
- 60%: stays the same color / takes on the dominant color
- 35%: becomes the neighborhood average
- 5%: mutates
Dimmer than inland pixels, these have a high chance of influence from differing neighbors, and as such are less likely to take a random mutation.

Border - each side has 2 same-color pixels, but the center is different (i.e. it is sandwiched between two groups of color)
- 50%: becomes the average of both sides
- 25%: becomes side 1
- 25%: becomes side 2
These pixels are being fought over by the groups on either side, and the fight for dominance eliminates the chance of random mutation or "independent ideas".

Anarchy - every color is different
- 25%: becomes the neighborhood average
- 35%: becomes the average of the inner 3 pixels
- 40%: mutates
These are the dimmest pixels; they either try to unite the neighborhood through averaging, or mutate and try to "build a new nation".
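The classification above can be sketched as a function over a 5-pixel neighborhood. Treating "same color" as exact equality is a simplification of mine; the real code presumably compares hues within some tolerance:

```python
# Sketch of classifying a 5-pixel neighborhood into the groups above.
# "Same color" is exact equality here for simplicity.
def classify(n):
    a, b, c, d, e = n  # two left neighbors, center, two right neighbors
    if a == b == c == d == e:
        return "nation"
    if a == b == c == d or b == c == d == e:
        return "inland"      # 4 in a row, a different color on one side
    if b == c == d:
        return "city-state"  # closest neighbors match, farther ones don't
    if a == b and d == e and c != b and c != d:
        return "border"      # center sandwiched between two groups
    if len({a, b, c, d, e}) == 5:
        return "anarchy"     # every color is different
    return "edge"            # a short same-color run on one side (catch-all)
```

Each category would then roll a random number and apply its percentage table (stay, average, or mutate) to pick the pixel's next color and brightness.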
This system performed pretty well, but it was very "jumpy". So, I added another loop that interpolates between the current value and the new value after each update, which makes the changes happen much more smoothly. Overall, I think it is a nicely ordered chaos. It is much more fun with the meanings I attached; while testing, I found myself rooting for rebels that sprung up in the center of large nations, and betting on wars between large groups. However, there is a bug somewhere that makes it primarily use red-purple-blue colors, which I haven't been able to track down.
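The smoothing pass is ordinary linear interpolation; a sketch (the step fraction here is my guess, not the value from the project):

```python
# Smoothing pass: rather than jumping straight to each new color value,
# every LED moves a fraction of the remaining distance each frame.
def smooth(currents, targets, t=0.25):
    return [c + (g - c) * t for c, g in zip(currents, targets)]

# Repeated frames converge on the target values without abrupt jumps.
frame = [0.0, 100.0]
for _ in range(3):
    frame = smooth(frame, [100.0, 0.0])
```

A smaller `t` gives slower, dreamier transitions; a larger one approaches the original jumpy behavior.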
To wrap things up, I wrote a BASH script that executes when the Pi starts up. This script starts the Processing sketch that processes and displays the camera input and the Python program that runs the LEDs. This means that all I have to do to start it is plug it in, and there is no input required to get everything running. Once again, this was a crucial step in making the piece as seamless as possible.
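I haven't reproduced the real script here, but a hypothetical sketch (the paths and sketch name are placeholders) might look like:

```shell
#!/bin/bash
# Launch the Processing sketch (camera + shader) and the LED program.
# processing-java is Processing's command-line runner; rpi_ws281x
# needs root, hence the sudo.
processing-java --sketch=/home/pi/mirror/MirrorSketch --run &
sudo python /home/pi/mirror/leds.py &
wait
```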
I wanted to give the piece a "message" of some sort, and after some research, I really liked the ideas of Jacques Lacan regarding the self. According to Lacan, the moment we recognize ourselves in the mirror (as young children) marks the birth of the ego. We see a unified body, which contrasts with our disconnected and uncoordinated physical senses. This begins a life-long conflict between an idealized self-image and a flawed reality. It also marks a sort of self-alienation, where we project our identity onto an external image of ourselves, i.e. "that person over there is me". This does not have to be a literal reflection; it can be anyone we identify with, whether friends, celebrities, etc. As a result, our personal identity is composed of external bodies, and is therefore a construct of the imagination rather than our authentic self, a fact we are typically oblivious to. Furthermore, we are driven by a need to exude this identity and receive confirmation of it from others. For example, if you tell your friend, "I'm so tired after this weekend", is it because that information will be beneficial for them? Or does it stem from a desire to propagate a self-image, to cast yourself in a light you find desirable?
In a crumbling and chaotic universe, we look towards mirrors to secure the self: "no, I'm still here, I can see myself". If you can view yourself, there can be no denying your place in the universe. In the same way, the ego is an inauthentic agency serving to reassure us and conceal a lack of unity. Beyond the reflection, our identity is also composed of signifiers: words and details about our lives that we have internalized. The things your parents always told you, the descriptions other people have given you; these bind us to our self-image, i.e. the relation to the image is structured by language.
After reading various sources and taking notes on this topic for a few days, I began to develop my own ideas about how it relates to our current society. I began to see our obsession with social media / virtual communication as a sort of retreat into the ego. On the internet, you are not bound to your physical or authentic self, and as a result your ego has no limits. Your profile is an embodiment of your idealized self-image, and since those who see it may not know the "real you", there is nothing to prevent this image being taken as authentic by those you interact with. Therefore, you can receive a constant stream of validation of this image, which disconnects you further from your authentic self and pulls you deeper into the idea of who you are, without the grounding reality of face-to-face interaction.
By that notion, all virtual communication with strangers is identity-to-identity rather than face-to-face. You are presenting an inauthentic self-image (even if you believe it to be authentic) and those you communicate with are presenting an image as well. As such, it is easy to lose sight of the real person on the other side of things, since there are no physical cues or reminders. This promotes a loss of empathy, since it is difficult, if not impossible, to put yourself in the shoes of someone with whom you have no physical connection or relation. Furthermore, real spontaneous conversation forces the emergence of the subconscious, or authentic self, since you are put on the spot. In contrast, on the internet you have the time and freedom to "craft" interactions and moderate them with respect to your self-image. The question "what would I say in response to this?" is inherently inauthentic and reflective of the ego: if you were truly responding, it wouldn't require deliberation. I can see this attitude spilling over into real interactions as well, since we become accustomed to presenting our identity rather than authentically interacting.
We develop a sort of "hyper self-focus", where we act out the life of the "person in the mirror" and try our hardest to present that reflection in the best light. Since our only focus is on the self, there is no room for empathy, and we are disconnected from the world around us. Instead of engaging with our surroundings, we are preoccupied with managing and curating the life of "that person who is me". Research on the topic presents similar findings: there has been a distinct decline in empathy and a rise in narcissism over the last ~30 years. Researchers attribute this to a variety of causes, such as the "self-esteem" parenting method, intense pressure to achieve (which makes everyone around us a competitor and an obstacle to our success), and a decline in free, unsupervised play between groups of children.
This research significantly changed the way I view life and self, and I am still in the early stages of developing my understanding of it. Therefore, when it came to integrating it in my project, I was at a loss for a while. Eventually, I did my best to boil it down to one sentence, and redesign the code with this "thesis" in mind:
Our reflection is an idealized, inauthentic construct that serves to reassure us and make sense of our place in a crumbling and chaotic universe.
I thought the camera output was an accurate representation of the "idealized, inauthentic construct" because the edge detection only shows the most prevalent identifying characteristics, while leaving the rest hollow, to be filled in by our imagination. Also, it is very smoothed and essentialized, which covers up the flaws we perceive in our physical bodies. Furthermore, the trail of reflection served as a sort of indication of the future, and the understanding that we are in flux, where "now" is just one frame among millions in the timeline of our lives. This tied back to Lacan's idea that our reflection is "the future perfect of what I will have been for I am in the process of becoming".
To represent the crumbling and chaotic universe, I was drawn to Conway's Game of Life, for the incredible beauty and chaos that emerge from its ruthlessly simple rules. However, since my goal was chaos, my version was much more complicated, with added random chance, conflict, etc. By surrounding the rim of the mirror, it firmly places us at the center of this chaos, while reassuring us that we are not a part of it. For our image is perfect, and the simplicity of those around us could never compare (or so we think).