December 8, 2014 | In: Blue Rabbit

Iteration #5 Yes Naturally


Material use: 2 grams

Print time: about 10 minutes, standard settings

x = 20mm; y = 20mm; z = 24.43mm

Throughout my research and experiments I have learned a great deal. In the process of capturing a few different objects, I slowly became less optimistic about the fate of my 3-D scan platform. It now seems that it is too difficult to capture the necessary detail of most objects for them to have any practical use. Objects such as boxes, or anything with a mainly flat, matte surface, can be scanned fairly precisely. However, objects such as the little toy soldier or glasses frames have too much empty space and/or gloss for an accurate model to be made. With these restrictions on what can accurately be scanned with the hardware I have, how could anything practical even be made?

In addition to the rough, distorted models that I scanned, their conversion into a printable format also proved to be harder than I originally anticipated. Every detail that was assembled imperfectly in the 3-D model looked incrementally worse with each conversion step until finally stripped of its native color and sent to the print bed. For the purpose that I originally intended, my project had hit a brick wall. Not only were the models mangled nearly beyond recognition, but the polygonal intricacies made for a very slow print. Basically, my idea of a quick, easy scan followed by a painless touch-up and print proved to be a pipe dream, at least with a standard camera and 123D software. I did find the whole project to be a learning experience, and would definitely like to continue studying what is happening at the forefront of 3-D scanning technology. Now knowing that we have access to multiple XBOX Kinect units, I would like to direct my efforts either towards that or towards the construction of a DAVID 3D Laser scanner, something I first heard of in Fabricated.

What is happening at the forefront of the field of 3D scanning, and what can be done with common technology? Throughout my previous iterations, I have been researching the history and methods of 3-D scanning various large- and small-scale objects and landscapes. With the focus of this iteration being images, I decided to start my own attempts at scanning in order to improve my technique and experiment with the mechanics of 3D photogrammetry. Once I fully understand this, I will begin the assembly of my scanning platform, which loosely resembles a couple of existing models and is tuned to operate properly based on my own experiments with different backdrops.

A compact laser-based scanner platform


3D Assembly of my shoe, with issues

For my first attempt at capturing a 3D object, I tried to map my shoe on a glossy wood surface. I took about 15 images from various heights and plugged them into the 123D Catch program.

To my dismay, the program was confused by the assembly of the images for a number of reasons. Lighting was a big issue, with the light reflecting off my shoe differently in every image. Another issue I ran into was the reflectivity and pattern of the platform. This realization, reached through research and experiments, will change my platform's design, in that I originally planned to use a plain white backdrop.
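The two failure modes above (reflective surfaces and an untextured backdrop) can be screened for before uploading a photo set. As a minimal sketch, assuming the images are already loaded as 2-D grayscale arrays, the hypothetical check below flags frames whose border region is too uniform for photogrammetry software to find matchable feature points; the patch size and threshold are guesses that would need tuning.

```python
import numpy as np

def background_is_featureless(gray, patch=32, std_thresh=8.0):
    """Return True if the image border has too little texture for
    feature matching. `gray` is a 2-D uint8 array; `patch` is the
    border width in pixels and `std_thresh` an assumed cutoff."""
    border = np.concatenate([
        gray[:patch].ravel(), gray[-patch:].ravel(),      # top, bottom strips
        gray[:, :patch].ravel(), gray[:, -patch:].ravel()  # left, right strips
    ])
    return border.std() < std_thresh

# A plain white backdrop fails the check; a patterned one passes.
flat = np.full((480, 640), 255, dtype=np.uint8)
textured = ((np.indices((480, 640)).sum(axis=0) % 64) * 4).astype(np.uint8)
```

This is not how 123D Catch works internally; it is just a quick pre-flight filter one could run before a shoot to catch a backdrop problem early.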


Advice given to tune up my photoshoot

My second capture went much more smoothly, after I watched a tutorial about how to set up a proper photoshoot.

One of the photos I took of the box.


I placed my box in the line of fire and snapped away the recommended 50 photos. The background I chose proved more than sufficient for the software to stitch together its 3D model.
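The shoot above amounts to spacing photos evenly around the object at a couple of heights. A minimal sketch of that plan, with an invented helper and made-up default distances (not anything prescribed by the tutorial), looks like this:

```python
import math

def capture_positions(n_photos=50, radius_cm=30.0, heights_cm=(10.0, 25.0)):
    """Evenly space camera positions on rings around the object.
    Splits n_photos across one ring per height and returns
    (x, y, z) positions in cm, with the object at the origin."""
    positions = []
    per_ring = n_photos // len(heights_cm)
    for z in heights_cm:
        for i in range(per_ring):
            theta = 2 * math.pi * i / per_ring  # angle around the object
            positions.append((radius_cm * math.cos(theta),
                              radius_cm * math.sin(theta), z))
    return positions
```

With the defaults, 50 photos split into two rings of 25, i.e. one shot every 14.4 degrees, which is roughly the "step around the object a little at a time" rhythm the tutorial recommends.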

3D assembly of images taken of the box


The difference in output quality made it clear to me that the design of my platform must include a properly covered surface and proper lighting, neither of which was considered at the time of my first iteration. With this knowledge I will be one step closer to building a user-friendly scan platform.


How to Shoot Your Photographs. Digital image. I.materialise. I.materialise, n.d. Web. 17 Nov. 2014. <http://i.materialise.com/blog/entry/how-to-make-a-3d-printed-object-from-a-photo-in-5-easy-steps>

Photon 3-D Scanner. Digital image. Ars Technica. Ars Technica, n.d. Web. 17 Nov. 2014.

November 10, 2014 | In: CST, Uncategorized

Week 6 CST Observations

“Is he a psycho? What the hell is his beef with me?”

“I think that he thinks that technology hasn’t lived up to its promise and that we should all be demanding better of our tech. So for him, that means anyone who actually likes technology is the enemy, the worst villain, undermining the case for bringing tech up to its true potential.”

 

The struggle between human and technology was quite apparent during the participant portion of this week's COM exercises, when the TinkerCAD website was going through updates. I love that we always have a fall-back method to make sure we still learn something useful, be it more programming-oriented (OpenSCAD) or visual (Adobe Illustrator). The most important development this week, though, was by far the formation of smaller groups with similar project scopes. I was also excited to see the Blues and Rabbits switch Mondays and Tuesdays, because I was wondering what effect, if any, it would have on us. We are sort of our own experiments in this class, which has always intrigued and confused me.

Throughout the research I have done in the field of 3-D scanning, I have noticed the struggle between companies pushing large, commercial-scale scanning hardware and consumer-driven attempts to eliminate the need for this seemingly redundant hardware. Software such as 123D Catch claims that, with the right technique and software interconnectivity, the average person can produce an accurate 3-D depiction of a real-life object from just a few images. Through my mishaps and other research, though, I have a growing belief that this assemblage of software conversions can be a little more tricky. In order to tame this beast, one must first get to know it and all of its components. The scanning platform that I initially attempted to saddle up with was vastly inadequate for the task I set out on, as I learned through trial and error. My original idea didn't even consider depth of background, which is critical in mapping an object accurately, but this just intrigued me more. I want to know what to apply to my scanner so that a person with no previous knowledge of the matter can make use of this amazing technology without having to worry about any process besides scanning.

Fabio Remondino has been at the forefront of researching methods to accurately recover 3-D meshes from photographs, or photogrammetry, as opposed to using expensive scanner hardware. In his 2003 journal article From Point Cloud to Surface: The Modelling and Visualization Problem, we can see that his disposition also leans towards the cheaper, more accessible route to a 3-D model. He even goes on to state that the photographic mode has a higher measurement reliability in terms of photogrammetry, but lacks detail due to fewer images. It intrigues me to know that not only is the method I'm pursuing more novel, it can even improve on the mapped details of a more complex and costly scan.

Another interesting culmination of cell-phone hardware and existing topographical data is described in the work titled Mobile Photogrammetry by Armin Gruen and Devrim Acha, where they blend aerial data, GPS coordinates from a cell phone, and images taken by the cell phone to construct a 3-D map of an area. I am really excited to discuss this with my classmate Forbes, as I hope it could lead to a more convenient way to map Olympia. In the paper "Shape and Correspondence Problem," written by Abhijit S. Ogale and Yiannis Aliomonos, the battle between detail and accuracy is revealed. The introduction and list of incorporated previous works proclaim that there is an "energy output" threshold for compiled images, meaning there is a give and take between detail and accuracy. The article goes on to list the sub-types of the calculations that cause this threshold when compiling images through this particular type of process. This set of congruent issues reminded me of problems I encountered in my first phase of testing the capabilities of the 123D Catch software, when I had no idea how the compilation of multiple images worked. Because of this week's research, I am glad to say that I can better understand the nature and history of research done on 2-D to 3-D digital object assemblies.

Based on the short history of 3-D scanning as described by myself and my cited experts, my thoughts and concepts have expanded to the point where my simple scanning platform has to take on new challenges. What shapes work best as topographical reference points for the compiling software to accurately gauge depth relative to the target object? How can I best utilize lighting to gather the intricate details of an object? What angle do I fix these lights at? Of all the marketed scanning platforms currently available, why do none of them utilize the simple, powerful tools within a cell phone? Why do they push the costly route of lasers and expensive cameras? I realize that while these questions make their way to the forefront, they are only reiterations of the questions I had coming into this project, now from a more educated point of view. Through reading my chosen articles and journals, I learned the inner workings of how a scan is actually produced: methods such as range imaging (with and without laser-supported systems), photogrammetric assemblies, and LiDAR-assisted 3-D topographical constructions of large-scale maps. Another very pleasing bit of information I discovered is that all this scanning (landscape, human, object) has a wide and constantly growing variety of applications, as we discover new uses for old technologies. The experts I have cited have also piqued my interest in another point: although expensive new hardware exists to perform the tasks we pursue in this day and age, our older, common hardware has not yet been fully utilized and re-evaluated, perhaps prematurely, considering how long it has been around. This thought echoes through my brain with the mantra of our class: what new do we need to create in a world full of stuff we haven't used to its full potential?

In my first iteration, I asked what could be useful to scan. Not yet backed by data, I struggled to find a concise answer. After various scholarly insights, information, and experiments, I can say that a scanner not only goes hand in hand with a printer, but can be used for facial recognition, preservation of artifacts, and even building on our knowledge of trees and tree systems, large and small scale. I have also learned refining techniques for capturing and smoothing 3-D objects, and gained insight into the mechanics and mathematics of assembling an object from multiple 2-D images. I would say that this week's research has left me knowledgeable enough to build my scanning platform, with confidence that it can improve the clarity and ease of capturing an item.

November 3, 2014 | In: CST, Uncategorized

Week 5 CST Observations

“The next morning Perry found himself desperately embroiled in ordering more goop for the 3-D printers. Lots more. The other rides had finally come online in the night after indeterminable network screw-ups and malfing robots and printers and scanners that wouldn’t cooperate” (Doctorow 201).

Perry and Lester are two people who are constantly solving problems like this, only to have them replaced by two more, and their struggle reminds me of how we, even through our class's hardware and software uses (e.g. projectors, printers, networks), are constantly developing and troubleshooting. This, and the development of our Blue Rabbit projects, has been a great learning experience to watch as well as participate in, as our concepts grow complicated in their slow transition to the real world. I was really interested in this week's class discussion about the possible ways to make a yurt skeleton with our subtractive TinkerCAD software, and hearing four or five entirely different solutions that all work well. Examples like this only strengthen my resolve that our group-oriented classes are beneficial learning opportunities. My project developed this week in the sense that the required parts list for my scanner expanded. I find this reminiscent of the quote I took about Perry from this week's readings.

October 27, 2014 | In: CST, Uncategorized

Week 4 CST Lab

“Well what are you going to do about it?”

“What am I going to do about it?”

“Sure, this is your thing, Perry…”

This week in our CST lab I got to witness so many projects shaping into their first stages of fruition, as well as a few that were well beyond the “idea” and already taking shape. The variety of learning methods I have seen throughout observations is refreshing, and I think that this creative freedom will allow for development of our ideas in a way that a controlled environment would never support. The excerpt I chose this week reminded me of the teachers’ observer role in this project, noting that although they are actively here to help us, it is our goal to create our own “thing”.

October 22, 2014 | In: Uncategorized

zotpress


For my project, I have decided to explore the realm of 3-D scanning items and turning that scan into a 3-D printable format. Throughout the upcoming weeks I will be making a platform capable of capturing precise 3-D images of an object with any smartphone, finding the most suitable program to convert these captures into printable .stl format, and testing the practicality of printing replicas by scanning and printing a house key. During the entire process, I will be researching and writing on the impact that making these technologies available to the public could have on society.

My idea first started as a simple way to print out a spare key. During my first attempt to make a key image using 123D Catch, I realized the process would require more tools, the most significant being a stable, adjustable arm to photograph the key from multiple vantage points without moving it. After this realization, I decided to widen the scope of my project and include the development of a platform to scan objects with, using the printer to make many of its components. The idea I came up with was inspired by a photo from a Google search for “3D scanner platform,” but redesigned to work with the gyroscopic sensors used by the 123D Catch Android app. I also discussed my design with Michael, who had previous experience making a scanner/platform with an XBOX Kinect unit, and he invited me to see that scanner in action. I hope this will shed some light on the next step in the process, which is converting the capture into a printable file. Although I haven’t worked with the program yet, I know that 123D Design offers import/export and brush-up of files captured via phone. Finally, after conquering all of these hurdles, I hope to test my digitally scanned and converted key file by printing it and trying it in a lock. But why spend so much time and effort on creating a replica of something that already exists?

I think that the urge to replicate comes from within our own bodies, constructed of DNA endlessly reproducing to scribe our stories within its helical pages. In terms of objects, replicas are seen as synonymous with fakes and forgeries, but can also be used to educate people and create an interactive learning environment when paired with 3-D scans (Roozenburg). Also, there exists a convenience factor of knowing that you have a digital copy of something, whether or not you need to manifest it at any given time. This idea somewhat nullifies the question of what to make in a world so full of stuff, because things could be produced only based on their need, and kept in a digital realm until then. In the case of a key in particular, having a digital copy stored somewhere safe could make you $50-100 richer, and a few hours younger.

After doing some research to see if anybody else had played with this concept, I came across an article in the Telegraph about two companies – Keys Duplicated and KeyMe – that offer paid key-duplication services. While I find this very similar, I would like to see the services offered to anyone, freely. Aside from that, I saw images and ideas for keys with customizable faces, adding a new element of fun to every lock/unlock session, woo-hoo!

But in the deep, dark corners of the internet (actually the front page), I also found atrocities. From bump keys to tests of the integrity and safety of the aforementioned KeyMe service, it seemed that the ill intentions were as plentiful as the good. Bump keys, formerly used by professional locksmiths, can be inserted into a lock, whacked a few times, and voila! Problem solved. In the digital realm, these keys can be found and 3-D printed easily, and in the hands of the wrong person they can be devastating (Sparkes). Andy Greenberg decided to test KeyMe’s claim that “only you (the key owner) can scan your keys” by attempting to scan his neighbor’s key, on a keyring, in a 30-second time frame. Surprisingly, he had no problem taking this scanned image and reproducing his neighbor’s key through the KeyMe service (Greenberg). The author and experimenter suggested that people just “keep it (the key) in their pants,” because in this day and age every bit of personal information is at risk.

I hope that over the next six weeks, I can add a new angle to the 3-D scan society by creating a platform that will allow any smart phone user to scan any object of their affection to keep with them forever in the digital realm, or use in a practical situation. In doing this, perhaps another case study can be conducted on what people find necessary to scan into the digital realm.

 

Works Cited
Greenberg, Andy. “The App I Used to Break Into My Neighbor’s Home.” Wired.com. Conde Nast Digital, 23 July 2014. Web. 21 Oct. 2014.
Roozenburg, Maaike. “Smart Replicas: Bringing Heritage Back to Life.” Smart Replicas Nov 25 (2013): 28-31. Smart Replicas. Royal Academy of Art, The Hague. Web. 20 Oct. 2014.
Sparkes, Matthew. “Lost Your House Keys? Just 3D Print Another…” The Telegraph. Telegraph Media Group, 29 July 2014. Web. 21 Oct. 2014.

October 20, 2014 | In: Blue Rabbit, CST

Week 3 CST observations

“‘What’s with the jungle-gym?’ It really had been something, fun and Martian-looking.

 

‘That’s the big one,’ Tjan said with a big grin. ‘Most people don’t even notice it, they think it’s daycare or something. Well, that’s how it started out, but then some of the sensor people started noodling with jungle-gym components that could tell how often they were played with. They started modding the gym every night, adding variations on the elements that saw the most action, removing the duds. Then the CAD people added an algorithm that would take the sensor data and generate random variations on the same basis. Finally, some of the robotics people got in on the act so the best of the computer-evolved designs could be instantiated automatically; now it’s a self-modifying jungle-gym'” (Doctorow 100).

 

Sometimes the best investigative technique is simply to ask, as Suzanne does about the jungle-gym. During the CST lab observations, I took a similar approach to hear about classmates’ Blue Rabbit projects. Watching these groups of similarly minded people form to collaborate reminded me of how the jungle-gym came to be, and I am excited to see what comes of it. I have finally decided to aim my project towards building a 3-D scan platform that will allow detailed 360-degree scans of small objects that can then easily be replicated. I want to take this a step further and try to duplicate a house key to test the real-life application of the objects we produce.