At first the software seems to take the same consumer-focussed approach, welcoming you with a happy animated explainer video. After that, though, it’s a great-looking and well-functioning (Windows-only) application. There is one problem, however, when using it with the XYZ 3D Scanner or Intel RealSense F200.
You might not have been at our studio, but I can tell you that the flat screen is on the left wall, not the right as pictured in the scanning preview above. The Sense for RealSense application has more settings than XYZscan and the RealSense SDK sample combined (screenshots of the Advanced Features below), but it misses a crucial one: the ability to mirror the camera feed. Remember that the F200 is a front-facing 3D camera, so its feed is mirrored by default, which makes it nearly impossible to scan away from yourself. The manual does mention a “Flip Scan View” option, but that’s only available with SR300 hardware and just rotates the feed instead of mirroring it.
I did find a workaround: flipping my complete laptop screen. Read about it in the XYZ / RealSense F200 Review.
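For what it’s worth, mirroring a camera feed is conceptually nothing more than a horizontal flip of every frame. This is my own minimal NumPy sketch of what the missing setting would do (it is not part of the Sense software, and assumes frames arrive as height × width × 3 arrays, as they do in most capture APIs):

```python
import numpy as np

def mirror_frame(frame: np.ndarray) -> np.ndarray:
    """Flip an H x W x C image left-to-right, undoing the
    default mirroring of a front-facing camera."""
    return frame[:, ::-1]  # reverse the column (width) axis

# Tiny 1 x 3 "frame" with three distinct pixels to show the flip.
frame = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
mirrored = mirror_frame(frame)
print(mirrored[0, 0])  # the pixel that was last is now first
```

If the software exposed even this single toggle, scanning away from yourself would be no problem.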
If you’re curious you can check out the results of the sneaker scan above with the geometry resolution set to its lowest and highest setting, but I’ll continue with my two test subjects, Teddy and Patrick, for the sake of comparison. Switching between scan modes is as simple as clicking one of the three always-visible buttons:
Body scanning isn’t available with F200 hardware (it should work with the R200 and SR300), so I started with a head scan. The actual scanning procedure is visible in the first image of this section. I noticed that the software asks you to hold still once in a while to “capture a good image”, similar to the mobile apps I tested for my Structure Sensor Review. This is nice because it allows the camera to capture the best-possible key frames to generate the texture map. Unfortunately the F200’s camera isn’t very light sensitive, so in normal indoor lighting conditions (it doesn’t work outdoors) you’ll always get blurred textures (like the example of the sneaker above). The examples below were both taken with three soft-boxed studio lights, which greatly improves the texture quality.
After capturing and waiting for some calculations, you’ll get to the screenshot below. And yes—finally—editing features!
First things first: look at that texture quality. It’s the best I got from the RealSense F200 compared to the other software I tested it with, like the XYZ Software and the RealSense SDK. And this was scanned by attaching the scanner horizontally to the bottom right corner of my laptop screen and walking around Patrick. As you can see, I accidentally forgot to scan the top of Patrick’s head (which is also hard with a complete MacBook Pro in your hands, by the way), but it’s perfect for testing the Solidify feature, which turns the scan into a watertight, or manifold, 3D mesh suitable for 3D printing. It’s completely automatic and has no settings, but it did a good job in my tests. The software tries to color the newly generated geometry in a way that blends with the rest of the texture. This works for smaller areas, but is understandably harder for larger holes.
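A quick aside on what “watertight” actually means: a triangle mesh is watertight (manifold) when every edge is shared by exactly two triangles, so there are no holes for a slicer to choke on. Here’s a small pure-Python check of that definition. It’s just an illustration of the standard criterion, not Sense’s internals, and it assumes the mesh is given as vertex-index triangles:

```python
from collections import Counter

def is_watertight(faces):
    """True if every undirected edge is shared by exactly two triangles."""
    edge_counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_counts[(min(u, v), max(u, v))] += 1
    return all(count == 2 for count in edge_counts.values())

# A tetrahedron is a closed surface...
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
# ...but removing one face opens a hole, like the missing top of Patrick's head.
open_mesh = tetra[:3]

print(is_watertight(tetra))      # True
print(is_watertight(open_mesh))  # False
```

Solidify essentially takes a mesh that fails this test and adds geometry until it passes.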
You can Crop the result, use the Erase feature to manually remove unwanted parts, or use the Trim feature pictured above to make straight cuts, which is very handy if you want to 3D print bust scans like this. Finally, the Color feature lets you tune the brightness and contrast of the model with simple sliders, but that wasn’t necessary in this case. Here’s the result as an interactive Sketchfab (you can follow me there too!) embed:
The result above was made with the Geometry Resolution set to the lowest value, delivering a polycount of just under 100K. Increasing the Resolution can double the polycount and does add some extra detail in the geometry, but also more noise in the mesh. And apparently this forces the software to prioritize geometric detail over texture quality, at least with my hardware setup.
I like the “low-poly” version better and I guess Patrick agrees.
To wrap up this review I naturally had to scan Teddy one last time. And to keep the scanner as still as possible to prevent motion blur in the textures, I mounted it on a small tripod with a tie-wrap. Below you can see that the device is mounted towards me, so I didn’t have to mirror the camera feed. It also shows my light setup for these scans (two 40 x 40 cm softboxes with 65 Watt, 5500 Kelvin studio lights).
And since I was now seeing myself in the background, I couldn’t resist taking a selfie:
— Nick Lievendag (@NickLievendag) July 25, 2016
At this point you probably think capturing the Teddy Bear is just a quick formality to compare the quality. But I was totally surprised that the Sense for RealSense software had a really hard time capturing it. In fact, it failed! I tried it both turntable-style like above and by walking around the object like I did with the sneaker, but the software consistently lost tracking when I rotated around the side of the bear towards its back.
To find out why this was happening, I read the PDF manual and discovered that the tracking algorithm (currently) only uses geometric data—not also the RGB color information like most modern 3D scanning software packages (including, I guess, the RealSense SDK and XYZscan software itself).
However, the manual did note that the Sense for RealSense software has a special Tracking Assist feature. Depending on the size of your object, you need to print a single or dual sheet (I printed the dual sheet version on a single large A3-sized sheet) pattern with a regular printer and turn on the feature in the Advanced Settings.
But while the Tracking Assist feature looks good on paper, it didn’t solve my problem with scanning Teddy. The software still lost tracking around the 90-degree mark, when it could only see the side of the bear. Maybe this feature does work on better RealSense hardware like the SR300, but it wasn’t effective for this particular test. Luckily I was able to solve this by adding some extra geometry to the scene in the form of a few Nespresso cups.
I had to put stickers on them because the cups alone are too glossy to be detected. But… with the cups it worked at once, without any tracking problems!
The result above was captured at the highest Geometry Resolution setting (lowest-poly version here). Compared to the result from the RealSense SDK application, it’s notable that although the polycount is actually a bit lower (218K vs. 262K), it has more geometric detail. If you set the embed above to MatCap Rendering mode, you can see a hint of the knitting pattern for the first time (and check the eyes!). The texture is a bit softer, but still usable.
And of course, because of the editing features you can easily remove parts of the scan you don’t need. I tried the Trim feature for this first, but drawing a trim line between the bear and the book resulted in the bear being deleted. According to the manual, the Trim feature removes the smallest part, and the book and turntable probably form a larger volume than the bear. So I used the Crop tool to delete most of the unwanted geometry and removed the rest manually with the Erase tool. Then I used Solidify to close the holes in the legs.
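The “removes the smallest part” behavior is a classic keep-the-largest-component operation: split the mesh into connected pieces and throw the small ones away. As a sketch of the idea (my own illustration of the general technique, not how Sense implements Trim), here is a pure-Python version using union-find over vertex indices:

```python
def split_components(faces):
    """Group triangles into connected components; two triangles are
    connected when they share a vertex. Uses a simple union-find."""
    parent = {}

    def find(v):
        root = parent.setdefault(v, v)
        if root != v:
            root = find(root)
            parent[v] = root  # path compression
        return root

    for a, b, c in faces:
        parent[find(a)] = find(b)
        parent[find(b)] = find(c)

    groups = {}
    for face in faces:
        groups.setdefault(find(face[0]), []).append(face)
    return list(groups.values())

def keep_largest(faces):
    """Keep only the component with the most triangles,
    discarding smaller parts (the opposite of what happened to my bear)."""
    return max(split_components(faces), key=len)

bear = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]  # three connected triangles
book = [(10, 11, 12)]                     # one separate triangle
survivors = keep_largest(bear + book)
print(len(survivors))  # 3: the larger part survives
```

Trim does the inverse on the cut pieces (drops the smallest), which is exactly why the bear, smaller than book plus turntable, was the part that disappeared.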
What’s also great are the Export and Sharing options of Sense for RealSense that appear when you click Finish.
Hitting Save will allow you to name the model and save it to the Library of the application itself, which can be accessed by clicking the icon with the 9 squares in the top left corner. Exporting can be done in .OBJ, .STL, .PLY and .WRL format, so plenty of choice there too. You can also connect the software to your Sketchfab account and export directly to this great platform for showcasing and sharing 3D models. I couldn’t test the Facebook option because it apparently supports only Profiles, not Pages (I only have a Facebook Page), but it does also upload the model to Sketchfab first and then posts that to your Timeline. Facebook supports interactive Sketchfab embeds by default, so I just copy & pasted the Sketchfab URL to my page manually: