Fuel3D is discontinuing sales of the Scanify in all regions outside the US & Canada.
As part of this process they have opted to make their Professional software free to use for all SCANIFY users.
In addition, while stocks last, customers in the US and Canada can get SCANIFY for a reduced rate of $799.99 + taxes until 11th August 2017.
It’s important to note that Fuel3D will still honour all 1-year manufacturers’ warranties on products sold by authorised distributors before 11th August 2017. In addition, Fuel3D will support the SCANIFY product until 31st December 2017.
More info and the unlock code that can be used to upgrade Fuel3D Studio to Pro can be found here.
Fuel 3D is a UK-based manufacturer of 3D capturing technologies. If you keep an eye on the 3D market like I do, you might have read that the company recently received €1.7 million in EU Horizon 2020 funding to develop a 3D capture solution for eyewear. And just last month, it announced the CryoScan3D—an enterprise-level foot scanner specifically aimed at the orthotic market.
What I’m reviewing here is their $1500 / €1200 (ex VAT) handheld 3D scanner launched in 2015—the Scanify—kindly provided to me by Belgian reseller KD85.com (thanks, Wim!).
The Scanify is an interesting product, because it’s very different from other scanners. And although it’s marketed as an all-round 3D scanner, it’s only usable for a few specific purposes. But it does so in an impressive way.
Right from the start, when browsing through the Scanify website, it’s clear that the company places a large emphasis on design. The green-and-grey branding is everywhere, as is the “fire up your creativity” mantra. The device comes in a very green box with Apple-ish attention to detail. It contains the device itself, a power adapter with every conceivable wall-socket add-on, a USB cable, a Quick Start Guide and three target discs—more about those later.
The device itself is manufactured with the same amount of care as the packaging. It’s lightweight but rock-solid plastic, with rubber hand grips that photographers will recognize from DSLR cameras. The same is true for the large shutter buttons (two of them) and the standard tripod thread. Let’s take a closer look:
The lights you see are the LED Guide Lights. Next to each is a Xenon flash. In the white outline in the center there’s an RGB camera. Another is located in the bottom-center part at a slight angle. According to the specs, they’re both 3.5 megapixel cameras. You can see the power cord and USB cable sticking out of the bottom.
This is where the Scanify differs from other 3D scanners: it has no lasers or a projector that beams infrared (like the Sense and Structure Sensor) or visible-light (like the Einscan-S) patterns. Instead of relying on these depth-sensing technologies, it uses photogrammetry algorithms to distill depth information from multiple photos, all taken within a second. This is the same technology used in software-only reality capture applications and mobile 3D scanning apps. So to someone like me who has done a lot of both photography and 3D scanning, the Scanify sits in between a digital photo camera and a handheld 3D scanner capable of capturing objects from all sides in a continuous motion.
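To make the principle concrete (this is emphatically not Fuel 3D’s proprietary algorithm, just a textbook toy), here’s a minimal Python sketch of how photogrammetry can recover depth from two photos taken a known distance apart: for each patch in one image you find the horizontal shift (disparity) of the best-matching patch in the other, and depth is inversely proportional to that disparity. The images and all numbers below are synthetic, purely for illustration.

```python
import numpy as np

# Two synthetic "photos": the right view is the left view shifted
# horizontally by a known disparity, mimicking a stereo pair.
rng = np.random.default_rng(0)
left = rng.random((32, 64))          # random texture stands in for a photo
true_disp = 5                        # simulated shift in pixels
right = np.roll(left, -true_disp, axis=1)

def match_disparity(left, right, x, y, patch=4, max_disp=10):
    """Return the disparity whose patch gives the lowest
    sum-of-squared-differences match along the scanline."""
    ref = left[y:y + patch, x:x + patch]
    errors = []
    for d in range(max_disp + 1):
        cand = right[y:y + patch, x - d:x - d + patch]
        errors.append(np.sum((ref - cand) ** 2))
    return int(np.argmin(errors))

# A pixel well inside the image recovers the simulated shift.
print(match_disparity(left, right, x=30, y=10))  # prints 5
```

In a real pipeline this matching runs for every pixel, and the resulting disparity map is converted to actual depth using the cameras’ baseline and focal length.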
The advantage of this is speed: capturing a single 3D image is done in a split-second. This doesn’t require the subject to keep still. Another advantage is that photogrammetry usually delivers higher-quality 3D geometry than entry-level 3D scanners.
Connection-wise it’s simple: plug the power supply into a wall socket and the USB cable into a computer—in my case a MacBook Pro Retina with an i7 processor, running Windows 10 through Boot Camp. More about the software in a bit.
What is missing from the box are holders for the target discs—which are present in the photos on the product website. Fuel 3D does offer models of a handheld holder and a 45° stand, but the fact that buyers have to 3D print these themselves—like I did—is a weird contrast for such a polished device. I don’t think its main audience is 3D printer owners to begin with.
It’s good to know that the Scanify comes with basic (Studio Starter) software, but there are two more paid, subscription-based options:
Studio Plus is €44.99 (ex VAT) per year and Studio Advanced €89.99. Whether you need the paid features will mainly depend on whether you prefer to edit the meshes in other software or stick to a single-software workflow, and on whether you want to stitch a lot of captures together into a single model, which I will show you later. Either way, these prices are fair and not a deal-breaker for professionals in my opinion, as long as Fuel 3D keeps actively developing and improving the Studio software. Which it does, because there will be a Mac version this year—something many creative professionals like myself will like.
This review is based on my experience with the 30-day trial of Studio Advanced, which is full-featured except for a limit on the total number of cloud rendering jobs you can submit. Enough to decide if it’s worth the price.
The user interface of Fuel 3D Studio is well designed in the signature green-grey color scheme. It’s as much a capturing as a project management tool: everything you do can be organized in projects in the left sidebar. On top of that you can add tags to captures to find them later or mark things as favorites.
The small icons in the bottom right are for exporting (as .STL, .OBJ or .PLY) or sharing directly to the popular 3D showcasing website Sketchfab.
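As an aside, the .OBJ format offered here is plain text, which makes exports easy to inspect or post-process outside Fuel 3D Studio. Here’s a minimal sketch of what such a file contains, using a single made-up triangle (real exports from the Scanify of course also carry texture coordinates and a material reference):

```python
# Build the text of a tiny .OBJ file: "v" records are vertices,
# "f" records are faces that index into them (1-based).
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]

lines = [f"v {x} {y} {z}" for x, y, z in verts]
lines += ["f " + " ".join(map(str, f)) for f in faces]
obj_text = "\n".join(lines)
print(obj_text)
```

Saving `obj_text` to a file with an `.obj` extension gives a mesh that any 3D package can open.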
Personally I think these icons are a bit small and in an awkward place. If you would want to use this software on a Windows tablet—more about that later—it will be hard to comfortably tap them.
While not very ergonomic, it’s possible to hold the Scanify in landscape mode, but you have to manually rotate the viewport. The viewfinder window detects the target disc and will display a green circle around it when you’re at the right distance. This is between 30 and 40 cm. Combined with the narrow field of view of the cameras, this is the main limiter of the practical use of the Scanify. The maximum size of objects that can be captured is about the size of a sheet of A4 or letter paper. The wooden Buddha hanger is 16 cm wide from ear to ear.
I discovered that the lighting in the room has consequences for the resulting 3D capture. I first did the wooden Buddha hanger above with just the flashes in a studio lit through windows during the day:
I found the textures of the capture above quite dark, especially considering that the xenon flashes are very intense. So I tried another capture with the three softboxes turned on that I used for the photography in this post:
This way the textures are brighter and more natural, but there’s a catch. If you select the MatCap Render option in the two embeds above, you can clearly see that the first capture has a lot more geometric detail. The wrinkles and definition in Buddha’s face are way better. So while some extra lights do improve texture quality, they counteract the way the flashes are used to let the algorithm determine depth (probably relying on comparing hard shadows).
Here’s the result from the scan of the sneaker. It’s a good example of the difference between the high level of detail on the front and the lower detail on the sides:
While searching for other objects to scan, I realized I don’t have many that fit the field of view of the Scanify—and meet the list of criteria: opaque, bright, matte. So again I had to resort to my toddler daughter’s toy box. Here’s the Teddy Bear you may recognize from my other 3D scanner reviews:
Again: the texture is a bit dull (I’d say a bit 70’s color-wise) but the texture-less MatCap render mode shows that the Scanify did a good job on the knitting pattern. To put this result into perspective (no pun intended), let’s take a look at the same Teddy Bear captured with a regular digital camera (specifically a Sony RX100M2 with 20.2 megapixels) and the Autodesk ReCap 360 software I reviewed earlier—an online solution for calculating 3D data from a set of regular photos through photogrammetry, just like the Scanify:
If you set both embeds in MatCap render mode, you can see that the geometry of the knitting pattern on the bear has finer details with the Scanify. This proves that Fuel 3D’s flash-based algorithm is one of a kind and indeed capable of capturing impressive 3D geometry within a fraction of a second.
Cropping & Stitching
There’s, however, a big difference between the Scanify’s approach and every other photogrammetry method—or depth-sensor-based 3D Scanners. The others are all designed to generate 3D data by capturing a subject from a wide range of angles, all the way up to 360 degrees. This way they can differentiate between the subject and the background, and separate them. The Scanify results always contain the background. The Teddy Bear was shot standing in front of a white wall, so the geometry on the edges is weirdly stretched.
There’s no way to automatically remove backgrounds in Fuel 3D Studio, so you’ll have to use the manual Cropping Tool. This allows you to create multiple positive or negative cropzones. The selection mechanism automatically connects the points you set to create a shape. This works fine for most simple shapes, but I had quite a few occasions where the selection became a bit of a mess.
If you do need more geometric detail on the sides of your object, Fuel 3D Studio has a feature to stitch together multiple captures (6 in the free version, and 9 and unlimited in the Plus and Advanced paid versions respectively) into a single 3D object. This process is semi-automated, and to show you how it works I turned to my 3D printed Yoda bust.
I took 3 shots of Yoda from different angles and manually cropped them with the built-in tools. It’s absolutely required to remove both the background and any details towards the sides where the image appears to be stretched. I’m an advanced user of both Adobe Illustrator and After Effects and do a lot of bezier-based design and rotoscoping work with a Wacom Intuos, but it still took me a while to crop Yoda—mainly because making selections on a 3D object this way depends on an algorithm that wants to connect the dots in ways I had to work against. Personally, I’d rather draw a selection freehand with my Wacom stylus for precision. Anyway, here are the crops (I made the actual captures of Yoda on our dark studio floor):
After cropping, you have to add the captures to a so-called Scan Pool, so the software knows they belong together. Here you can also put them in the right order to help the stitching algorithm.
In the next step, you have to add the captures to a Stitch Group, from which you can start the automated stitching procedure by clicking the green Stitch button. This works in two phases. In-between you can set some preferences, like you can see in the screenshot below.
Depending on the complexity of the captures, this procedure can take a while. On my 2.6 GHz quad-core i7 it took over 15 minutes, which I think is quite long for 3 shots. The algorithm did a decent job of finding the overlaps in the geometry and fusing the captures accordingly. I was, however, quite surprised to notice that all color information had been discarded! Here’s the result:
I’ll be honest about this feature: it sort of works and the result is better than I expected. But it’s not perfect: the top part of Yoda’s face is fused okay, but his neck clearly isn’t. This could be a result of the way I cropped the images. And it probably needs a lot more overlapping captures for this angle of almost 180 degrees. But realistically speaking I found the manual part of this procedure very tedious. I wouldn’t want to do this for 6, 9 or an unlimited amount of photos. Especially when there are software and hardware-based solutions that do the whole process automatically.
The above is true for objects that don’t move, so you can take an endless amount of photographs from every angle. For things that do move, like people, software-only photogrammetry solutions are often not suitable. And while scanning people with depth-sensor-based 3D scanners works pretty well (just check out my itSeez3D Review), that’s partly because they’re not detailed enough for slight movements to mess up the result. Body movements are already hard to control for most people, but controlling facial movements is simply impossible. And this is the very purpose the Scanify excels at…
It makes total sense that Fuel 3D got EU funding to develop 3D scanning technology for the eyewear industry when you start capturing faces. Let’s dive right in and start with my favorite human test subject and business partner Patrick (of Animator’s Toolbar for Photoshop fame). I’m grateful that I can use him as an example for this post, because these Scanify captures really do capture a great amount of facial detail…
Honestly, I expected a higher-resolution texture map from a device that has two 3.5 megapixel cameras. The texture maps exported with the .OBJ are simply a single photograph (in old-school .BMP format) of 768 x 512 pixels—a little under just 0.4 megapixels. I’m not sure why the texture is so low-res: the algorithm clearly uses higher-res images to get the geometric detail. There’s a work-around to increase the texture quality by shooting a separate high-res photo and aligning that with the texture image (which isn’t warped in any way), as explained in this example from Scanify user Marc Wakefield. But it would be a lot easier if Fuel 3D made it possible to export the full 3.5 megapixels of the camera for the texture map.
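To put that gap in numbers, here’s the back-of-the-envelope calculation behind the 0.4 megapixel figure above:

```python
# The exported 768 x 512 .BMP texture versus the 3.5-megapixel
# figure from the official camera specs.
texture_mp = 768 * 512 / 1_000_000   # exported texture map, in megapixels
camera_mp = 3.5                      # per-camera resolution, per the specs

print(round(texture_mp, 2))          # prints 0.39
print(round(camera_mp / texture_mp)) # prints 9
```

So each camera captures roughly nine times as many pixels as end up in the exported texture.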
That being said, the big upside is the geometric detail! I can really look at the MatCap render mode and think: “How did this happen in a fraction of a second?” I must emphasize that all examples here are done with the cloud rendering option—the locally processed results are simply unusable in most cases. The cloud rendering is pretty fast, too—just a few minutes. And I’m very sure that this cloud algorithm will improve vastly over time, since it was introduced only a few months ago.
It’s a pity that the geometric quality is only present on the planes that face the camera straight-on. The sides of noses, for example, are simply stretched geometry and texture. And even though the Scanify has two cameras, it didn’t capture Patrick’s nostrils (he has them, I checked). Theoretically, you could make two more captures from the sides, like I did with Yoda, but the cool-down period of the flashes prevents you from doing this fast enough for the person to keep still.
Another noticeable artefact is the rendering of eyes. The only reflective part of the human face looks very Terminator-ish, probably because of the reflections of the harsh flashes. Pupils are also strangely extruded.
If you follow me on Instagram, you may have read that I was wondering if the Scanify is the ultimate 3D Selfie Maker. It’s certainly possible to shoot yourself:
Before I wrap up this review, I’ll use this ego shot to get into the last few features of the Studio software: the Remeshing Tools (all part of the paid Plus and/or Advanced versions).
Hole Filling, Smoothing and Decimation are obvious and work as expected. The Volumize Tool is especially handy for Scanify captures, because they’re a single layer of polygons instead of a volumetric, manifold 3D object. If you want to use them for 3D printing, for example, using this feature is a must. The results greatly depend on how you cropped the capture beforehand. Here’s my face in the Extruded Surface Type (which adds depth, like a mask) and the Flat Surface Type (which gives a flat bottom surface, ideal for 3D printing):
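Conceptually, the Extruded Surface Type does something like the following sketch (my own simplified interpretation, not Fuel 3D’s actual implementation): duplicate the single-layer mesh, push the copy back along the depth axis, and flip its face winding so its normals point the other way. A real implementation would also stitch the boundary edges together to make the shell watertight, which I omit here for brevity.

```python
import numpy as np

# A single-layer "capture": one triangle facing the camera (+Z).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2]])

thickness = 0.1
back_verts = verts - [0, 0, thickness]    # offset copy behind the surface
back_faces = faces[:, ::-1] + len(verts)  # reversed winding flips the normal

solid_verts = np.vstack([verts, back_verts])
solid_faces = np.vstack([faces, back_faces])
print(len(solid_verts), len(solid_faces))  # prints: 6 2
```

With the boundary faces added, such a doubled shell becomes a closed volume that a slicer will accept for 3D printing.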
The Mirror Surface Type is exactly what you think it is: a face on either side (if you have small children and often wish you had this second pair of eyes in the back of your head… now you don’t anymore—it’s scary).
Practical Uses & Portability
Again, I can totally see the use for this in the eyewear industry, especially tailored (sun)glasses. But for that I think the ears are a necessity. The Scanify is also great for capturing fine textures in small objects, so I can also imagine its use in archeology (fossils) and forensics (footprints). Fuel 3D also sells Press-and-Scan Compound that can be used to create scannable imprints of objects that are otherwise impossible to scan: like reflective, metal objects.
But there’s a problem for any use of the Scanify outside: it’s not portable. I wanted to scan some tree bark outside, but the fact that it has to be connected to a wall socket for power prevents this. It’s not completely impossible, though. In fact, because the Scanify draws just 2A of electricity, you can power it with a decent USB power bank. Fuel 3D even sells a special Mobile Package with a short USB-to-power-port and USB-to-microUSB cable. This set could be used to take the Scanify outside with a laptop. Even better, you could 3D print a pair of mounts for a Dell Venue 8 Pro Windows tablet (you can get one for as low as $250) and go completely mobile.
Although I’m not sure how well an underpowered tablet will handle the high-poly 3D objects, and must admit that this approach feels a bit hack-and-slash for a product that’s otherwise very polished, I actually think everyone who buys a Scanify should also get the tablet and do this mod. If only used for capturing on the go, it will give a great amount of freedom to the product and really make it a 3D Camera. The Cloud Rendering option is great in this scenario, if only it could be set as a default to prevent the local rendering from happening automatically, since that would drain the tablet’s battery.
It would be nice if the whole mobile contraption fit into the Neoprene Soft Case that Fuel 3D sells for a reasonable price, but judging by the images this isn’t the case (no pun intended).
As you might know, I write this blog from the perspective of a creative professional. So I’m always thinking: what can I do with this creatively? Well you could use it for face replacement in 3D, VFX or Game Design work where missing ears fit the scene:
Or you could add a different kind of ears to the mirrored model, make a hole in the top and 3D print a narcissistic coffee mug:
I would love to hear any other creative possibilities in the comments!
Let’s start with the fact that for $1500 / €1200 (ex VAT) the Scanify is affordable for any professional who has a specific purpose for it. Even adding a few hundred extra to get the complete Mobility Package with the Dell tablet and battery pack, so you can make a dedicated ready-to-shoot device out of it, is probably worth a professional’s money.
But you do have to have a very specific purpose for this device. If you want to make full 360-degree 3D scans, the Scanify is not for you. You can do it occasionally if you have loads of time and patience, but the scanner is actually more appealing to people who don’t. If you want to capture the 3D geometry of shallow, textured objects no bigger than a sheet of office paper in a split second, the Scanify can generate impressive results.
Remember, this product is just a year old and the software has had a great number of updates since its release—in particular the great cloud rendering feature. My biggest wishes for the product would be high-resolution texture maps and some kind of automatic background removal. I would have no problem with shooting objects in front of a green screen for this. Finally, I’m no fan of markers in general and I wish the Scanify could work without the discs. If Fuel 3D added face detection for portraits to determine the ideal distance, maybe it could drop the targets for that purpose and make face scanning even faster.
I’ll keep following Fuel 3D’s developments, because I’m very curious how they will use this technology in future products. Hopefully, they’ll also keep making affordable hardware alongside their industrial products.
As always, I hope you enjoyed reading this post. If you have any questions, please drop a line in the comments or send me a Tweet. And if you think this article is useful for your friends and followers, I’d appreciate it if you share it by using the buttons below.