With 2016 coming to an end, it’s time to look back at another year. For me it was the year I decided to put my blogging and R&D focus entirely on 3D capturing technologies. Without any regrets! It’s a great journey so far and I’ve tested a lot of great hardware and software in a market that’s changing faster than many people realize.
The very first 3D scanning hardware I reviewed was 3D Systems’ first generation Sense scanner in February. Because that device was part of the consumer-focused Cubify brand that was discontinued in 2015, I didn’t expect the Sense to get a successor. But I was wrong and today I’m writing about my experience with the Sense 2 — or as 3D Systems calls it, the “Next Generation Sense”. It’s now targeted more towards professionals, but still has the consumer-friendly price tag of $399.
My review model has been kindly provided by 3D Printer and 3D Scanner store Machines 3D!
In this review, I’ll compare the 3D Systems Sense 2 to:
- Its predecessor, the original 3D Systems Sense 3D Scanner (which I’ll refer to as the Sense 1)
- The XYZ 3D Scanner, because that features the Intel RealSense F200 sensor which has been succeeded by the SR300 sensor in the Sense 2.
- The Structure Sensor, which 3D Systems also used to sell under the name iSense. This rebranded device has recently been discontinued. (I’ll refer to it as Structure Sensor)
Differences from the Sense 1
The Sense 2 has the exact same form factor as the original. It’s actually the same plastic housing, except that the inside of the handle is now made of transparent plastic instead of the original’s milky white color.
It still has a tripod thread on the bottom, which is nice. But it still hasn’t got a button to start and stop the scanning process. I still think this is a huge design flaw — even the $199 XYZ 3D Scanner (review) has a button! The housing has a perfect place for it…
Technically, the Sense 1 & 2 both use infrared depth-sensing technology. The difference is that the original version contained a PrimeSense Carmine 1.09, while the new one features Intel’s latest SR300 depth sensor.
It still hasn’t got a button to start and stop the scanning process
On paper this means the depth camera resolution has gone up from 320 x 240 to 640 x 480 pixels. The color camera resolution has made an even bigger jump, from 320 x 240 to Full HD 1920 x 1080 pixels. Both specs state a depth resolution of 1mm at a distance of 0.5 meters.
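As a quick sanity check on those numbers, here's a minimal sketch of the raw depth-pixel math. The resolutions come straight from the specs above; the ~71.5° horizontal depth FOV is my own assumption (check Intel's SR300 datasheet), used only to estimate the lateral sample spacing at half a meter:

```python
import math

# Resolutions quoted in the specs: Sense 1 (PrimeSense Carmine 1.09)
# vs. Sense 2 (Intel RealSense SR300) depth cameras.
sense1_depth = (320, 240)
sense2_depth = (640, 480)

factor = (sense2_depth[0] * sense2_depth[1]) / (sense1_depth[0] * sense1_depth[1])
print(f"Depth-pixel count multiplier: {factor:.0f}x")

# Rough lateral sample spacing at 0.5 m, assuming a ~71.5 degree
# horizontal depth FOV for the SR300 (an assumption, not a spec I
# can confirm from this review).
fov_h_deg = 71.5
distance_m = 0.5
width_m = 2 * distance_m * math.tan(math.radians(fov_h_deg / 2))
spacing_mm = width_m / sense2_depth[0] * 1000
print(f"Approx. lateral spacing at 0.5 m: {spacing_mm:.1f} mm/pixel")
```

Under that assumed FOV, the lateral spacing works out to roughly a millimeter per pixel at 0.5 meters, which lines up with the 1 mm depth resolution both spec sheets claim at that distance.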
But the Sense 2 is not better in every way. While the minimum scan distance is the same at 0.2 meters, the maximum distance of Intel’s SR300 hardware limits the Sense 2 to 1.6 meters (Intel says it’s 0.75 meters for 3D scanning, by the way) while its predecessor could scan at 3 meters — almost twice as far! Because of this, the maximum scan size also dropped from 3 x 3 x 3 to 2 x 2 x 2 meters. Realistically though, I can hardly think of objects bigger than 2 meters that one would want to capture with a handheld scanner like this.
It’s also worth knowing that with the Sense 2, there’s no longer Mac support, so Apple users like me have to use Windows 8 or 10 through Boot Camp. This is true for many 3D scanners by the way, and I personally don’t mind rebooting to Windows very much to make scans. Honestly, the Mac version of the original Sense wasn’t as good as the Windows version.
And as a plus, the new Sense for RealSense software from 3D Systems offers a great experience for the Sense 2 or any other RealSense-based scanner — and it’s free and unlimited. I reviewed this software separately when I tested the XYZ 3D Scanner (which contains the older RealSense F200 that the SR300 in the Sense 2 succeeds) because it gave better results than XYZ’s own app.
Anyway, let’s start scanning!
The first thing I was curious about was the geometric quality of the scans from the Sense 2. I still have the foam mannequin head at the office, so I decided to do a direct comparison with the 3D model from the Sense 1. For this I set the Geometry Resolution to the highest setting.
While the scan from the Sense 2 (right) has almost twice as many polygons — 348k vs. 180k — this doesn’t bring out more detail. The Sense 1 result (left) shows more geometric details, like the wrinkle above the eyes and the definition of the lips and nose. The Sense 2 scan also contains more geometric noise at the highest resolution setting. More about that later.
The Sense 2 scan has almost twice as many polygons as that of the Sense 1, but this doesn’t bring out more geometric detail.
So far, not so good. The Sense 2 FAQ also honestly states that “the new Sense does not scan uneven textures like hair quite as well as the old one did”. But if you have read my Sense 1 Review, you know that I was actually surprised by its geometry capture. The problem, however, was the color capture — it was simply so bad that the scans were unusable for anything other than single-colored 3D prints.
For many upcoming purposes, such as online viewing, games and Virtual Reality, texture quality is getting increasingly important. I dare to say that for many of these cases, good texture capture — and mapping — can compensate for lower geometric detail. So let’s take a look at the textures the Sense 2 captures with its HD RGB camera.
For comparison, here’s Teddy Bear (18cm tall) scanned with the Sense 1:
As you can see, the texture quality is poor: blurry and not very bright. If you’re curious about the geometry, you can set the Rendering style of the Sketchfab embed above to MatCap through the gear icon in the bottom right corner — it has a polycount of 207.5k.
And here’s Teddy captured with the Sense 2:
A clear improvement in texture quality! Not museum-quality, but it’s certainly usable for many purposes. Keep in mind though that I used a small studio light set with softboxes for almost all my Sense 2 scans. The scan above was done with the Geometry Resolution slider at 3/4 which I call “medium-high”. At this setting the polygon count is 323k. A bit higher than the Sense 1, but with a comparable amount of details. Increasing the resolution to the highest setting doubles the polycount, but doesn’t add more details — just more noise.
A clear improvement in texture quality! Not museum-quality, but it’s certainly usable for many purposes.
And if you’re wondering how the Intel RealSense SR300 sensor inside this device compares to the older F200, here’s the same Teddy captured with that under similar circumstances:
With less than 100k polygons and blurrier texture, the F200 certainly delivers less quality than its successor.
Lastly, here’s the same Teddy Bear scanned with the Structure Sensor on an iPad running itSeez3D 4.1 (review), which has recently been updated with better object scanning. Even though it was cloud-processed, the polycount is 50k, which makes it the least detailed scan of the above.
Flexibility & Ease of Use
At 18cm tall, Teddy is a bit smaller than the minimum scan size of 20 x 20 x 20 cm, so let’s move on to something bigger. This is also a good moment to demonstrate that you can scan objects in two ways. The first is by walking around the subject with the scanner in one hand and a laptop or tablet in the other.
Or you can put your object on a turntable and rotate it in front of the scanner. You can do this from a tripod, which — if you move the turntable very slowly and use a light kit — ensures minimal noise in the geometry capture and reduces the chance of motion blur in the color textures.
While that might work for some objects, it didn’t for the bust above, because I wasn’t able to capture under the nose and chin. So I opted for the best of both worlds: using the turntable, but keeping the scanner handheld. Here’s a video of the process (played back at 4x speed). It also shows how the capturing looks in the software.
I used the help of a second person to rotate the turntable to shoot a better video, but you can also do it alone with two hands. In this mode the lack of a start-stop button is not a big problem, because you can sit at a table and use one hand to operate a laptop and rotate the turntable.
Capturing is easy and I was surprised by the tracking of the Sense 2
As you can see, capturing is easy and I was surprised by the tracking of the Sense 2, both with a turntable and when walking around an object or person. Below is the scan with the Geometry Detail set to the lowest setting. I also made a scan at the highest setting, but in this case the difference was minimal (172k vs 190k polygons):
In comparison, the polycount of the same bust scanned with the Structure Sensor & itSeez3D is just 48.5k and the Sense 2 scan contains more geometric detail in the hair and eyes areas.
As far as I know, the current version of Sense for RealSense still doesn’t have RGBD tracking, which also tracks an object’s RGB texture data on top of detecting patterns in the geometry. While the SR300 inside the Sense 2 has greatly improved tracking over the older RealSense F200, it had some trouble with my Old Jar’s lack of geometric features.
Honestly, I think I could have worked my way around that with a few extra tries, but for the sake of this review it’s good to know that if that doesn’t work, you can print the Tracking Assist PDF and enable the similarly named feature in the Settings.
I put the print on a piece of cardboard so I could put it on the turntable without bending and it did make the scanning process for this object smoother. Here’s the result:
For the example above, I used the editing features of the Sense for RealSense software, including the hole-filling feature, which works well in most cases. It also tries to complete the texture. You can view the unedited scan here.
I won’t go into the editing features of the software in detail, because I’ve already written a separate review about it. But here’s a GIF to show how the Crop, Trim, Erase and Solidify features can be used to quickly clean up a scan. After that, you can also tune the brightness and contrast and have your model ready for export to Sketchfab or a 3D file (.OBJ, .STL, .PLY or .WRL).
I get a lot of questions from readers that want to use depth sensors like this for 3D scanning people instead of (just) objects. A lot of times this is to set up an affordable mini-me 3D printing service. So I know many of you have skipped to this part of the review. Let’s check it out.
The software offers a “Head” and a “Body” mode, but the former is actually a Bust mode, which is nice. It cannot be overstated that you need proper lighting for this, and because of the larger scan area it requires a more powerful setup than scanning objects. I would advise scanning busts of people sitting in a solid chair, so they won’t move and you can better scan the top of their heads.
Increasing the Geometry Resolution does increase the amount of geometric detail, but makes the texture mapping worse.
Before looking at a scan from the new Sense 2, let’s revisit the scan of my business partner Patrick, made with the first generation Sense 1:
That didn’t make me very happy. Mostly because of the textures, because the geometric detail is actually still usable for small-scale single-color 3D printing. Below is a scan made with the Sense 2 with the Geometry Resolution set to the lowest setting. It has a similar polycount as the Sense 1 scan (~100k polys) but has a vastly better texture:
Increasing the Geometry Resolution does increase the amount of geometric detail, but also introduces more geometric noise. And possibly because of that, the texture mapping gets worse. I encountered the very same behavior when scanning people with the XYZ / RealSense F200. Here’s an example with the Geometry Resolution set to Medium-High (420k polys):
Regardless of the Geometry Resolution, I noticed that the software has problems with texture mapping the eyes. If you rotate the embeds above, you can see that the eye texture is mapped onto the top of the eyelids.
The scans above are made with a softbox light setup. As an experiment, I wanted to test if I could make the Sense 2 more mobile for scanning people. And as a surprise, I discovered that the cheap battery-powered LED light I once bought on Amazon (more about that and my softbox setups here) came with a tripod thread adapter.
The surprising part is: it actually works pretty well. You do need to keep a more or less fixed distance from your subject to prevent too much lighting variation, but otherwise the result is similar to the softbox scan with a lot less hassle and fully portable (the Sense 2 is powered over USB and the LED has its own USB-chargeable battery).
But altogether, to me the results are still a bit “uncanny valley” because of the weird texture mapping of the eyes. Below is a scan taken at the very same moment, with the very same softbox setup, with an iPad mini 2, Structure Sensor and itSeez3D 4.1:
While the polycount of the model above is less than half of that of the Sense 2 scans, the quality and the mapping of the textures compensate for this in my opinion. And this is just the photo quality of an old iPad mini 2. Imagine that of an iPad Pro. I doubt Intel can put an iPad-level RGB camera in their RealSense depth sensors at the current price point, so texture-wise the Structure Sensor wins.
Altogether, to me the results of people scans with the Sense 2 are still a bit “uncanny valley”
The eye texture mapping problem is worse if a person has very small eyes, like myself. The scans below are taken with the Sense 2 and Structure Sensor / itSeez3D right after each other. The first one looks like I used some kind of funky Snapchat filter…
What about Skanect?
This is the point where some people might ask me (and believe me, many do) if using the Structure Sensor in combination with Skanect instead of itSeez3D will improve the geometry while keeping the texture quality high. This question has become more relevant than before, since itSeez3D now has a subscription model and/or charges per scan (depending on the chosen plan), while Skanect (a one-time $129 for the Pro version) and Sense for RealSense (which is free) let you make — and export — as many scans as you want.
I’m still not ready to give you a solid yes or no on this. The current public version of Skanect does allow capturing more detailed geometry with the Structure Sensor, but the textures aren’t very good. I’m still testing the Beta version which promised to solve this, but you’ll have to wait a little longer before I can say if it does.
Full Body Scanning
Last but not least, I’ve tested the Full Body Scanning capabilities of the Sense 2 in combination with 3D Systems’ Sense for RealSense software. While Bust Scanning was a rather smooth operation, increasing the scan volume to body size was surprisingly hard.
The first thing I noticed was that the sensor had huge problems with Patrick’s grey denim jeans, which aren’t even that dark. It would only capture them if I went very close, like 15-20 cm distance. And only at a straight-on angle. This not only means that it was out of the focus zone of the RGB camera, but it also made it harder for the software to track the bigger picture. On top of that, scanning below-the-waist areas at this distance could be uncomfortable in many professional cases. And dark pants are very common.
The same problem was present for Patrick’s hair which… erhm… went well with his choice of jeans color. I could scan his hair, but only close-by and straight-on, which is almost impossible because he’s taller than me. This problem only got bigger when we tried scanning on an automatic turntable. The scan below was made on the turntable and it took 6 rotations to get this — still partial — result.
Again for comparison, this Structure Sensor / itSeez3D scan was made in just one rotation. It didn’t fuse the beginning and end of the 360° rotation perfectly, but is otherwise a good likeness. And it was a smoother user experience.
By now I’ve scanned hundreds of people and objects with many scanners. So to get a better sense (pun intended) of the relative first-time user experience, I asked Patrick to make a Full Body Scan of me with the Sense 2 and the Structure Sensor.
The Sense 2 scan took him over 5 minutes — and 4 tries that I really didn’t feel like sharing — to make while the Structure Sensor scan was done in under 2 minutes — in one try. That is significant for body scanning, because it gets harder to stand totally still every minute.
Usually I start with this section, but in this case I have saved it for last — for a reason.
Let’s start with the fact that for $399, the Sense 2 is affordable compared to other handheld 3D scanners. It works on most Windows machines (see hardware recommendations here), so most people don’t need to invest in extra hardware (except for a light kit!). It also comes with great software that is totally free — no subscriptions, no limitations — and very user friendly. You can use this software to scan, edit, share and archive your scans.
But technically, it’s just an Intel RealSense SR300 sensor in the same simple (it looks better than it feels) plastic housing as the Sense 1. Nothing more. Not even a start/stop button, which I truly missed. The housing puts the sensor in portrait orientation. Combined with the handheld grip, this makes it a bit more usable (and maybe a bit more professional-looking, if that’s important to you) as a handheld 3D scanner if you use a laptop. But if you use a Windows tablet, you might prefer something you could clip onto it so it works more like an iPad / Structure Sensor (iSense) combo.
It’s just an Intel RealSense SR300 sensor in the same simple, plastic housing as the Sense 1. Nothing more.
Actually, gaming PC-maker Razer sells the RealSense SR300 as a 3D webcam under the name Stargazer for $149.99 — less than half the price of the Sense 2. And Creative sells it under the name BlasterX Senz3D for $179.
Both devices should work with the free Sense for RealSense software. But, honestly, this kind of feels like ripping off 3D Systems. They really did a great job making the software.
After discontinuing the Cubify brand, I expected that 3D Systems would stop selling the Sense 3D Scanner after they sold out. But they didn’t. They recycled (okay, slightly updated) the housing and put Intel’s latest depth sensor, the SR300, inside. The Sense 2 is more than twice as expensive as other SR300-based devices that can achieve the exact same results. The form factor of the Sense 2 is a bit more 3D-scanner-like, but unfortunately 3D Systems didn’t put a start/stop button on it. Mac users are out of luck, but the Windows software is a vast improvement over that of the Sense 1.
Looking purely at geometry capture, the Sense 2 is a small step backwards. The Sense 1 was better at capturing small details and dark colors and could be used from a farther distance to scan larger objects. In practice though, I don’t think these differences are significant for the target audience of infrared 3D scanners. If you’re thinking about upgrading from a RealSense F200-based device, there is a jump in geometry resolution.
Texture quality has made a giant leap compared to both the Sense 1 and the RealSense F200. The RGB camera of the SR300 is Full HD. It’s still not as good as a dedicated iPad RGB camera, though, so it will probably never beat the Structure Sensor when it comes to texture quality.
If you plan to use the Sense 2 to capture medium-sized (20 – 40 cm tall) objects on a manual turntable with a simple studio light setup, you can get good results with little effort. In some cases, you can try to get a bit more geometric detail out of the sensor by increasing the Geometry Resolution in the settings panel. In most cases though, this mostly introduces more geometric noise and makes the texture mapping worse.
If you plan to use the Sense 2 to capture medium-sized objects you can get good results with little effort.
While object scanning is foolproof, capturing people requires a lot more practice — and patience. Even for experienced users, it’s not predictable enough to get a good human bust scan every time. For this to work better, the software needs a few more smart features and better texture mapping.
Full Body Scanning is technically possible, but requires even more practice and patience than Bust Scanning. Dark fabrics like jeans are a challenge for the sensor and may force you to scan from close distances that are neither practical nor comfortable for scanning people.
So the Sense 2 is at its best when scanning objects. But if that’s what you plan to do, it’s good to ask yourself why you want to do it with an infrared depth sensor. You can achieve similar or better results through Photogrammetry — a technology that requires just a digital camera or a smartphone.
As always, I hope you liked this post. If you think it’s also useful for your friends and followers, please share it! And if you want to be the first to know when new posts are live you can follow me on your favorite social network: