Make: Online


Open MAKE, THIS Saturday, March 19, 10am – 2pm, at the Exploratorium

It’s that time again for the monthly Open MAKE/Young Makers program at the Exploratorium in San Francisco, CA. This month’s theme is Metal and Wire. The Featured Makers will be interviewed by MAKE’s Dale Dougherty in the McBean Theater between 1pm and 2pm. They’ll be talking about their work and process, and taking questions from the audience. Makers include:

* Tim Hunkin has set up shop inside the Tinkering Studio, and will share his humble beginnings.
* David Cole will bring an amazing chain-making machine.
* Jay Broemmel will talk about mutant bicycles.
* Sam DeRose and Alex Jacobson from the Young Makers program will talk about learning how to build a fire-breathing dragon!

And, between 10am and 2pm, other Bay Area makers and Tinkering Studio staff will be sharing build activities with the public, including how to build scribbling machines and how to create electronic circuits. For a full list of activities and more info, see the Tinkering Studio link below.

Open MAKE: Metal/Wire

 

Intern’s Corner: Autodesk Inventor Publisher Review



MAKE’s awesome interns tell about the projects they’re building in the Make: Labs, the trouble they’ve gotten into, and what they’ll make next.
Autodesk Inventor Publisher Review
By Nick Raymond, engineering intern

Autodesk recently sent Make: Labs a copy of their latest software, Inventor Publisher 2011. Earlier in the week, I had been using their 3D mechanical design program, Inventor 2010, to design a prototype cluster of light bulbs based on the “Spiderlite” to help our photography staff bring more light into our workshop while taking step photos for the magazine’s articles. With my 3D model already complete, this was the perfect opportunity to test the various features of the new software.

Inventor Publisher supports various 3D CAD formats (Autodesk Inventor, CATIA, Pro/Engineer, DWG 3D, DWF, STEP, IGES, and SAT) in case you choose to generate your 3D model using a different program. However, if you’re already familiar with the Inventor program’s layout, then you will have no problem acclimating to the commands. The Autodesk online videos are a great way to pick up tips and tricks if you want to jump right in, while the Help menu is a great resource that provides animations and instructions that explain the general tools and features.

Inventor Publisher gives you complete control over the appearance of your model. My design required six panels to be cut out of 5mm plywood using the laser cutter that we have here at the Lab. I was able to mimic the wood material type in the model by choosing from a list of preset visual wood styles, where I could then increase/decrease the size of the grain, rotate the pattern, and change the overall color and tones of the wood to produce a custom appearance very similar to the actual wood used for the project. Next I wanted to show a completely disassembled view of all the parts. The “Auto Explode” feature allows you to do just that, using the logical progression of how the model was built in the 3D rendering software to dictate how the model comes apart. To capture the images of the exploded view, you take what are called Snapshots, which are essentially digital still frames of the model.

These snapshots are compiled together and represent the 3D model at various stages of the assembly. Within the snapshots, Inventor Publisher allows you to insert callouts and arrows that help to direct attention to specific details, for example the size and dimension of a particular bolt. You can also add labels within each snapshot and magnify a portion of the model to give it emphasis. These images can then be used to create technical manuals, user guides, installation instructions, and any other form of visual documentation using 3D PDF, Microsoft Word, PowerPoint, raster, and even vector files. Inventor Publisher also provides an option for producing videos in either .AVI or Adobe Flash formats. To do this, the software automatically fills in the gaps between each snapshot to simulate fluid motion and generate the entire animated rendering of your instructions.

But the true power and application of the software is realized when you export your document onto the Autodesk Inventor Publisher server. Once online, other people can access your files by downloading the Autodesk Inventor Publisher Mobile Viewer and interact with your model in a 3D environment. As they follow along with the snapshots and instructions that you embedded in the file, others will be able to manipulate and move your model around in space to ensure that they understand exactly how to assemble your design. Try out the free app for your iPad or iPhone to learn more.

Be warned: this program may require a decent amount of processing power from your graphics card. Check out the Inventor Publisher website for more information about system requirements. Overall, this is a great product with stunning visual results and intuitive, easy-to-use controls. Be sure to check out the quick video that I made which demonstrates how my light cluster design is assembled. Then start designing and documenting your builds!

 


Making a Laser-cut Zoetrope with Processing and Kinect


We’ve been loving the job that Andrew Odewahn has been doing with the Codebox column. But Andrew has always liked the idea of having others contribute to it and explore the Processing language from different angles. So he introduced us to Greg Borenstein, an artist and teacher from New York. This is Greg’s first guest contribution to the column. Greg’s work explores the use of special effects as an artistic medium. He is fascinated by how special effects techniques cross the boundary between images and the physical objects that make them: miniatures, motion capture, 3D animation, animatronics, and digital fabrication. He is currently a grad student at NYU’s Interactive Telecommunications Program. Welcome, Greg! And thanks to Andrew for hooking us up. -Gareth

This codebox shows you how to create a physical zoetrope using Processing, a Kinect, and a laser cutter. The sketch will load in a movie recorded from the Kinect’s depth camera, use the Processing OpenCV library to turn that movie into a series of outlines, and save the outlines out in vector form as a DXF file that can be sent to a laser cutter. I’ll also explain the design of the viewer mechanism that gives the zoetrope its spin.

About Zoetropes

Before jumping into the code, a bit of cultural history and inspiration. The zoetrope was a popular Victorian optical toy that produced a form of animation out of a paper circle. On the circle, zoetrope makers would print a series of frames from an animation. Then the circle would be surrounded by an opaque disc with a series of slits cut out of it. When the paper circle spun, the viewer would look at it through the slits and see an animation. The slits acted like a movie projector, letting the viewer only see one frame at a time in rapid succession, resulting in the illusion of movement.

Recently, people have begun to figure out how to achieve the same illusion with three dimensional objects. The artist Gregory Barsamian builds sculptures that spin around in front of strobe lights in order to produce the illusion of motion. The sculptures consist of a series of different objects at different stages of motion and the strobes act like the slits in the zoetrope to create the illusion of motion (Barsamian may be familiar to some Make fans from our earlier coverage: Gregory Barsamian’s Persistence Of Vision).

Pixar recently picked up the trick to create a physical zoetrope for their lobby. Animators there were convinced that the physical zoetrope is an unparalleled demonstration of the principle of animation: the transformation of a series of still images into moving ones.

So, what’s the recipe for a physical zoetrope? We need a series of images that represent consecutive stages in a motion. Then we need to transform these into distinct physical objects. Once we’ve got these, we need a mechanism that can spin them around. And, last but not least, we need a strobe light to “freeze” each object into one frame of an animation.

How can we do this ourselves? Well, to get that series of objects we’re going to extract silhouettes from a piece of video generated from the Kinect. We’re then going to turn those silhouettes into a vector file that we can use to control a laser cutter to cut out a series of acrylic objects in the shape of each frame of our animation.

Let’s dive in.

Recording the Depth Movie

The first thing to do is to download Dan Shiffman’s Kinect library for Processing and put it into your Processing libraries folder. If you’re not familiar with how to do that, Dan’s got great clear instructions on the Kinect library page.

We’re going to use this library to record a depth movie off of the Kinect. (In theory, you might be able to also use a conventional camera and a well-lit room, but what fun would that be?) Thankfully, recording your own depth movie is only a few lines of code away from the Kinect example that ships with the Processing library:
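A minimal version of that recorder sketch, reconstructed from the discussion below, might look like this. Treat it as a sketch, not the author’s exact code: the output filename and the codec constant are assumptions, and your version of Dan Shiffman’s Kinect library may need slightly different setup calls.

```processing
// Reconstruction from the discussion below; filename and codec are assumptions.
import org.openkinect.processing.*;  // Dan Shiffman's Kinect library
import processing.video.*;           // for MovieMaker

Kinect kinect;
MovieMaker mm;

void setup() {
  size(640, 480);   // match the Kinect's depth image size
  frameRate(24);    // must match the movie's frame rate for correct playback
  kinect = new Kinect(this);
  kinect.start();              // start reading data from the device
  kinect.enableDepth(true);    // enable the depth image
  // quality setting, filetype, and filename for the recorded movie
  mm = new MovieMaker(this, width, height, "depth_movie.mov",
                      24, MovieMaker.VIDEO, MovieMaker.HIGH);
}

void draw() {
  // Draw the grayscale depth map, then capture it as a movie frame.
  image(kinect.getDepthImage(), 0, 0);
  mm.addFrame();
}

void keyPressed() {
  if (key == ' ') {   // spacebar: stop recording and save the file
    mm.finish();
    exit();
  }
}

void stop() {
  kinect.quit();      // Kinect cleanup on exit, to avoid funky errors
  super.stop();
}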

Discussion

Let’s talk through how this works. First, we include the Kinect library and the Processing video library; we’ll need that later in order to record a movie. Then, we declare Kinect and MovieMaker objects. The MovieMaker is the object that’s going to do the work of recording the output of our sketch into a movie file.

In setup, we set the frame rate to 24 so that it will match the movie we record. We also configure the sketch to be 640 by 480 to match the size of the video image that’s going to come in from the Kinect. We do some basic Kinect setup: tell our kinect object to start reading data from the device and enable the depth image. Then we initialize the MovieMaker class, giving it a quality setting, a filetype, and a filename. You can read more about how MovieMaker works in the Processing documentation. It’s important that the frame rate we pass to MovieMaker matches that of the sketch so that our movie plays back at the right speed.

Our draw function is incredibly simple. All we do is call kinect.getDepthImage() and draw its output to our sketch using Processing’s image() function. That shows us the grayscale image representing the depth map the Kinect is extracting from the scene. This will be a black-and-white image where the gray value of each pixel corresponds not to the color of the object but to how far away it was from the Kinect. Closer objects will have lighter pixels and farther away objects will be darker. Later, we’ll be able to process these pixels in order to pick out objects at a particular depth for our silhouette.

Now that we’ve drawn the depth image on the screen, all that we have to do is capture the result into a new frame of the movie we’re recording (mm.addFrame()). The last significant detail of the sketch is that we use key events to give ourselves a way of stopping and completing the movie. When someone hits the spacebar, the movie will stop recording and save the file. Also, we have to remember to do some Kinect cleanup on exit or else we’ll get some funky errors whenever we stop our sketch.

Here’s an example of what a movie recorded with this sketch looks like:

Now, if you don’t have a Kinect, or you’re having trouble recording a depth movie, don’t despair! You can still play along with the next step. You can download that depth movie of me doing jumping jacks straight from Vimeo: Kinect Test Movie for Laser Zoetrope. I’ve also uploaded the depth movie I used for the final laser zoetrope shown above if you want to follow along exactly: Kinect Depth Test Movie. That latter movie features Zach Lieberman, an artist and hacker in New York and one of the co-founders of OpenFrameworks, a C++-based cousin of Processing.

Creating the Laser-Cutter File

Now that we’ve got a depth movie, we need to write another Processing sketch that processes that movie, chooses frames for our animation, finds the outlines of our figure, and saves out a vector file that we can send to the laser cutter.

To accomplish these things, we’re going to use the Processing OpenCV library and Processing’s built-in beginRaw() function. Create a new Processing sketch, save it, create a “data” folder within the sketch folder, move your depth movie into there (named “test_movie.mov”), and paste the following source code into your sketch (or download it from the lasercut_zoetrope_generator.pde file):
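A condensed sketch along the lines of lasercut_zoetrope_generator.pde, assembled from the code fragments and discussion that follow, might look like this. The grid spacing, canvas size, and timeBetweenFrames value are my assumptions; the OpenCV calls are the ones the walkthrough names.

```processing
// Condensed reconstruction; spacing, sizes, and timing values are assumptions.
import processing.dxf.*;      // for beginRaw(DXF, ...)
import hypermedia.video.*;    // the Processing OpenCV library

OpenCV opencv;
int currentFrame = 0;         // defined outside draw; incremented each pass
int totalFrames = 12;
float timeBetweenFrames = 0.5;  // assumed; tune for your own movie

void setup() {
  size(800, 600);
  noFill();
  stroke(0);
  opencv = new OpenCV(this);
  opencv.movie("test_movie.mov");    // load the depth movie from data/
  beginRaw(DXF, "full_output.dxf");  // record all output into a DXF file
}

void draw() {
  // Jump to the next evenly spaced frame, starting at 0.3 seconds in.
  opencv.jump(0.3 + map(currentFrame * timeBetweenFrames, 0, 9, 0, 1));
  opencv.read();

  // Use modulo to lay the 12 frames out in a four-by-three grid.
  int x = (currentFrame % 4) * 200;   // 200px cell size is an assumption
  int y = (currentFrame / 4) * 200;

  // Flatten to pure black and white, keeping only nearby (lighter) pixels.
  opencv.threshold(150);

  pushMatrix();
  translate(x + 20, y);
  scale(0.2);
  // Find continuous areas (blobs) and trace their outlines, connect-the-dots style.
  Blob[] blobs = opencv.blobs(1, width*height/2, 100, true, OpenCV.MAX_VERTICES*4);
  for (int i = 0; i < blobs.length; i++) {
    beginShape();
    for (int j = 0; j < blobs[i].points.length; j++) {
      vertex(blobs[i].points[j].x, blobs[i].points[j].y);
    }
    endShape(CLOSE);
  }
  popMatrix();

  currentFrame++;
  if (currentFrame == totalFrames) {
    endRaw();   // complete the DXF, like mm.finish() in the first sketch
    noLoop();
  }
}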

Discussion

If you run this sketch with the second test movie I linked above, it will produce the following output:

…and will also save a file called “full_output.dxf” in the sketch folder. This is the vector file we can bring into Illustrator or any other design program for final processing to send to the laser cutter.

Now, let’s look at the code.

In setup, we load the test_movie.mov file into OpenCV, something that should be familiar from past posts on OpenCV. We also call beginRaw(), a Processing function for creating vector files. beginRaw() causes our sketch to record all of its output into a new vector file until we call endRaw(); that way we can build up our file over multiple iterations of the draw loop. In this case we’re creating a DXF file rather than a PDF because that format is easier to process for the laser, which needs continuous lines in order to produce reliable output. PDFs produced by Processing tend to have many discrete line segments, which can cause funky results when cut with the laser, including slower jobs and uneven thickness.

Now, before we dive into the draw method, a bit about the approach. We want to pull out 12 different frames from our movie that would make good frames for our animation. Then we want to have OpenCV extract their outlines (or “contours” in OpenCV parlance), and finally we want to draw those in a grid across the screen so they don’t overlap and the final DXF file will contain all the frames of the animation.

This sketch approaches these problems by creating a “currentFrame” variable that’s defined outside the draw loop. Then, on each run of the draw loop, that variable gets incremented and we use it to do everything we need: jump forward in the movie, move around to a different area of the sketch to draw, etc. Finally, once we’ve finished drawing all 12 frames to the screen, we call “endRaw()” to complete the DXF file, just as we called “mm.finish()” in the first sketch to close the movie file.

So, given that overall structure, how do we draw the contour for each frame? Let’s look at the code:

    opencv.jump(0.3 + map(currentFrame * timeBetweenFrames, 0, 9, 0, 1));
    opencv.read();

This tells OpenCV to jump forward in the movie by a specific amount of time. The 0.3 is the starting point of the frames we’re going to grab, and is something I figured out by guess-and-check: I tried a bunch of different values, running the sketch each time, seeing what frames I ended up with, and judging whether they’d make a good animation. “0.3” represents that starting time in seconds.

We want all of our frames to be evenly spaced so our animation plays back cleanly. To achieve this, we add an increasing amount to our jump of 0.3 based on which frame we’re on. Once we’ve calculated the right time, we read the frame of the movie using “opencv.read()”.

The next few lines use the modulo operator (“%”) with the currentFrame number in order to draw the frames in a four-by-three grid. Then, there’s a simple-looking OpenCV call that’s actually pretty cool given the context:

   opencv.threshold(150); 

This tells our opencv object to flatten the frame to a pure black and white image, eliminating all shades of gray. It decides which parts to keep based on the grayscale value we pass in, 150. But since the grayscale values in our depth image correspond to the actual physical distance of objects, in practice this means that we’ve eliminated anything in the image further away than a couple of feet, leaving just our subject isolated in the image.

If you’re using your own depth image, you’ll want to experiment with different values here until you’re seeing a silhouette that just represents the figure that you want to capture in animation.

The next few lines, wrapped between calls to “pushMatrix()” and “popMatrix()”, are probably the most confusing in the sketch. Thankfully, we can break them down into two parts to understand them: moving and scaling our image, and drawing the silhouette calculated by OpenCV.

The first three lines of this section don’t do anything but change our frame of reference. pushMatrix() and popMatrix() are a strangely named pair that makes complicated drawing code significantly easier. What they let us do is temporarily change the coordinate system of our Processing sketch so that we can use the same drawing code over and over to draw at different scales and on different parts of the screen.

   pushMatrix();
   translate(x + 20, y);
   scale(0.2);

Here’s how it works. First we call pushMatrix(), which means: “save our place” so we can jump back out to it when we call popMatrix(). Then we call “translate()” which moves us to a different part of the sketch using the x and y variables we set above based on our current frame. Then we call “scale()” so that anything else we draw until the next popMatrix() will be 20 percent the size it would normally be.

The result of these three lines is that we can do the OpenCV part that comes next — calculating and drawing the contour — without having to think about where on screen this is taking place. Without pushMatrix we’d have to add our x and y values to all of our coordinates and multiply all of our sizes by 0.2. This makes things much simpler.

Now, the OpenCV code:

   Blob[] blobs = opencv.blobs( 1, width*height/2, 100, true, OpenCV.MAX_VERTICES*4 );
   for (int i = 0; i < blobs.length; i++) {
     beginShape();
     for (int j = 0; j < blobs[i].points.length; j++) {
       vertex(blobs[i].points[j].x, blobs[i].points[j].y);
     }
     endShape(CLOSE);
   }

This code certainly looks complicated, but it’s not all that bad. The most interesting line is the first one, which calls “opencv.blobs()”. That function analyzes the image we’ve stored and looks for areas that are continuous, where all the adjacent pixels are the same color. In the case of our example movie, there will be exactly one blob and it will be around Zach’s silhouette. Our use of the threshold eliminated everything else from the scene. If you’re using my other example movie or your own depth movie, you may have multiple blobs and that’s OK, you’ll just end up with a more complicated vector file.

And once we get down to it, drawing these blobs isn’t too bad, either. We loop over the array of them, and each blob has a points array inside of it that we access in order to create vertices. Basically, we’re playing connect the dots: go from each point to the next, drawing lines between them until we complete the whole shape.

And that’s all there is to generating the DXF file.

Preparing for the Laser

After generating this DXF file, you’ll need to bring it into Illustrator or your favorite vector editing program to perform some basic cleanup: group each frame together into a single object, cut out the parts of the silhouettes that overlap the rectangle so that the figure will actually be attached to its base, etc. I also selected 9 of these 12 frames and then duplicated them so that I’d have a looping animation rather than one that reset back to a starting posture. I’ve uploaded the final Illustrator file here for you to look at: contour_animation_for_laser.ai

Once we’ve got the contours cut out, the last step is to design and cut the wheel that they’ll spin on. I acquired a thrust bearing (a kind of engineer’s lazy susan) that would allow my disc to spin freely. My bearing included holes on top for attaching things to it. I measured the distance between those and then put together a design for a disc that could mount onto the bearing and hold each of the frames of the animation:

Contour disc for laser

Getting just the right size for the slots so that the silhouettes would press fit in tightly without any glue took a little bit of experimentation and some false starts on the laser. You can download the Illustrator file for this design here: contour_disc_for_laser.ai

Once you’ve got those two Illustrator files, cutting them out on the laser is pretty much just as easy as hitting print. Literally: you actually start the process by hitting print in Illustrator. You’ve got to fill in a few additional details about the laser’s power and speed settings, but then you’re off to the races. The laser looks like this in action (not cutting a zoetrope part in this case, but it’s the same laser):

Hopefully, this tutorial has given you enough of what you need to start recording Kinect depth data and using it to generate laser-cutable vector files. Have fun!

Get Your Own Laser-cut Zoetropes!

In response to all of the great reactions to this project, I’ve started up a site to actually produce laser-cut zoetropes for purchase: PhysicalGIF.com. We’re offering kits for putting together zoetropes from designer-made animated GIFs. The kits will come with everything you need to assemble a zoetrope like the one shown here: the laser-cut parts, the base, even the strobe light. Eventually you’ll even be able to upload your own GIFs to have them converted into physical form. Head over there now to sign up to be notified when the kits become available.

More:
Check out all of the Codebox columns here
Visit our Make: Arduino page for more on this popular hobby microcontroller

In the Maker Shed:


Getting Started with Processing
Learn computer programming the easy way with Processing, a simple language that lets you use code to create drawings, animation, and interactive graphics. Programming courses usually start with theory, but this book lets you jump right into creative and fun projects. It’s ideal for anyone who wants to learn basic programming, and serves as a simple introduction to graphics for people with some programming skills.

 

Join The Arduino Team, Sorta…

Wow, I hope a MAKE reader applies for this! Arduino has partnered with a university in the south of Switzerland on a master’s degree in Interaction Design. Massimo (Arduino team) will be heavily involved in the teaching, and the students will have the chance to intern with them. The location is also great: lakes, mountains, and all the rest.

Arduino is partnering with SUPSI (the University of Applied Sciences and Arts of Southern Switzerland) to collaborate on the Master of Advanced Studies in Interaction Design. Students applying for this program will attend courses on physical computing and interaction design held by co-founders of the Arduino project such as Massimo Banzi; furthermore, they will have the opportunity to develop their master’s thesis in collaboration with Arduino and spend a whole term working with the Arduino platform in order to create innovative projects.

 

Maker Faire Detroit Town Hall, Tuesday, March 22, 6:30pm-8:00pm (ET)

You are invited to:

Maker Faire Detroit Town Hall
Tuesday, March 22, 2011, 6:30 PM – 8:00 PM (ET)

Detroit Public Library
5201 Woodward Avenue,
Detroit, MI 48202

Event Details: Dale, Sherry, Louise and Shauna will be hosting a Maker Faire Detroit Town Hall and Community Planning Meeting on Tuesday, March 22nd at The Detroit Public Library (Main Branch) in the Old Fine Arts room and would like to invite you to attend. …Read more on EventBrite

Share this event on Facebook and Twitter.
We hope you can make it!

See the EventBrite listing for full details.

Cheers,
The Maker Faire Team

 

Top 10: Rube Goldberg Machines

World’s Largest Lego Great Ball Contraption

GBCs are cool because they’re built separately by many different users, then brought to a convention with the expectation that they’ll work together. I love the dizzying variety of ball-moving gadgets! This one was built by 7 Danish, Belgian, and Dutch builders with a world-record 93 mechanisms linked together. The contraption was featured at Lego World 2011 in Copenhagen.

 

How-To: Documentation Camera Dolly

I’m continually blown away by Steve Hoefer’s work. Check out his documentation camera dolly (complete with instructions):

I know a lot of people who make lots of stuff. They even take the time to share as much as they can about their process. But documenting everything is a pain. It takes time away from the actual doing of the project. And even if you aren’t documenting to share with the world, documenting for yourself is incredibly valuable.

And let’s say you’re trying to show someone how to do something. Cook, solder, crochet, play chess… most anything with your hands. Wouldn’t it be handy to have a camera above your work, just like they have on those fancy TV shows? Yes, it turns out it would. And it also turns out to be pretty easy to make.

This overhead camera dolly holds a camera pointing straight down onto your work surface and it lets you easily move it both side-to-side and toward and away from you so it can focus on any part of your workspace.

Use it for instructional videos, live demos, time lapse videos or film your own cooking show!

• It’s cheap—I built the whole thing for less than $30.
• It’s simple to build. It only takes a few common tools and a few hours.
• It’s easily customizable to the size and needs of your workspace.
• It’s versatile. It works with just about any kind of camera from a webcam or cameraphone up to a professional DSLR.
• It’s modular and can be set up and torn down quickly and easily if you want to use it at an event.


 

Engineer Guy vs The LCD Monitor

For a few years now, I’ve had this hare-brained idea to try to separate the layers of polarizing film from a scrap LCD panel and make a polariscope out of them. So whenever I come across a dead one I tear it apart and do some experimenting. Probably been into half a dozen by now. But I’ve probably learned as much, or more, about how they actually work by watching Bill Hammack’s video this week. As always, Bill’s work has something to offer novices, experts, and those, like myself, who know just enough to be dangerous. [Thanks, Bill!]

 

