Agisoft PhotoScan (how to use)
Posted July 20th, 2016 by Warren Blyth
Here are my extensive (RAW) notes and tips on how to create things with Agisoft PhotoScan (I'll also make a simple one-page cheat sheet).
Basically, you need to go through 5 steps, which are conveniently located in order under the “Workflow” menu. I’m just trying to explain what I’ve learned about all the various options surrounding these 5 steps.
note: you can save your PhotoScan project as a .psx file at any point.
note: you can use Workflow > Batch Process to set up all the following steps in a row (nice for overnight processing)
-1. Overview of menu bar icons.
– There is a button to "add photos" at the top left of the Workspace pane. It is next to an "add chunk" button which I've never used (I believe a "chunk" is a way to store multiple sets of photos without them affecting each other). You can use "Workflow > Add Photos…" instead of this button. (This is usually the first action I take.)
– The main "Model" workspace has some useful icons across the top.
* 3 dotted-line shapes you can use to select cloud points (for deletion, later)
* 2 blue arrow shapes to manipulate your bounding box area (tells the program to ignore what is outside the box)
* 1 pyramid with blue arrows (rotates the object to be right side up)
* 2 icons for deleting and cropping, which I never use.
* then we get 6 icons that reflect the steps you are going through. These are ways to change what is shown: first-pass alignment points, dense cloud points, the mesh as vertex-shaded | solid | or wireframe, and the final triangle shows you the textured mesh.
* then we see a camera icon, which will show you the origin of each photo you took (so you can delete and realign any that failed)
* the final icon is a quick view reset. I use this all the time.
– The photo workspace has a different set of icons.
* 4 ways to select areas on a photo (starting with a dotted-line box), for masking.
* 3 icons that control how your selection will affect the current photo's mask.
* 5 things I never use (rotate, zoom, brightness)
* the final 3 icons control: shading (of points, after alignment, I believe), toggling the display of alignment points (so you can mask them out), and a view reset
0. Load Photos
I'd recommend looking through your photos before getting started. If you took fewer than 30 photos, the software might struggle (I believe you want at least 60% overlap between photos). If any photos are badly blurred, I'd delete them. I've kept some slightly out-of-focus photos just because they were the only angle available (better than gaping holes in the final mesh). But as you'll find with a lot of complicated computer magic, it's best to start with the highest quality: headaches add up at each step, and you might waste more time trying to compensate for a bad start than you would have spent just going back and taking high quality photos in the first place.
+ obscure note: "Tools > Camera Calibration" is an option if you have a weird/non-standard lens. But I believe the software can deduce your exact camera model from metadata within the photo files. I use a Samsung Galaxy Note 4.
– in the Workspace you can expand a chunk, expand its subfolder, and then double click any photo to see it in the main area. I'd suggest looking through your photos (in some other program that lets you move through them quickly) and masking out any huge problem spots before you start aligning the photos. These would be spots where someone walked through the frame (and isn't there in any other shots).
The software is smart enough to ignore details in the background/distance that change (smoke, clouds, water), but if something sits right in the middle of your core area for one shot (and you can't just delete the photo entirely), it'll probably help the software to mask that thing out (again, if there are a lot of these things, you might want to just go back and take new photos)
+ obscure note: if you shot from a stationary position (like you just turned in place, taking photos) you must move those photos to their own group and set the group as a "camera station" to help with the alignment.
+ obscure note: the "NA" next to each photo is a code for "not aligned" (you can go back later and right click to try to align problem photos).
+ obscure note: in the Photos pane, you can choose the detail view, select photos, choose "Estimate Image Quality", and delete those rated under 0.5.
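+ scripting note: PhotoScan also ships with a Python console (and Tools > Run Script…), so chores like this can be automated. Below is a minimal sketch of the "disable low quality photos" idea, written against the 1.2-era Python API (method and metadata key names may differ in other versions):

    # estimate image quality for every photo in the active chunk,
    # then disable anything rated under 0.5 (per the note above)
    import PhotoScan

    chunk = PhotoScan.app.document.chunk       # the currently active chunk
    chunk.estimateImageQuality(chunk.cameras)  # same as "Estimate Image Quality" in the Photos pane

    for camera in chunk.cameras:
        quality = float(camera.photo.meta["Image/Quality"])  # metadata key location varies by version
        if quality < 0.5:
            camera.enabled = False  # disabled photos are skipped by later steps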
1. Align Photos (~20 minutes)
– Accuracy: suggest High. This will affect all subsequent steps. (Each step lower than High shrinks each photo by a factor before considering it; "Highest" actually upscales each photo.)
– Pair preselection: speeds things up. (I think it lets the software group correlated photos, so each new photo doesn't have to be compared to every processed photo… it also uses a quick, super low quality pass to guess which photos are likely overlapping.)
– Advanced > Key point limit: 70,000 is default (sets the max number of points of interest to isolate per photo; can set to zero to find as many as possible)
– Advanced > Tie point limit: 4,000 suggested (sets the max number of shared points to find per photo. Set to zero to disable tie point filtering. You can also reduce this later, via "Tools > Tie Points > Thin Point Cloud…", to better prepare for the dense cloud after alignment is settled.)
note: I boost both of these point counts up when I have more photos than usual.
+ if you've set any masks on photos, you can check Advanced > "Constrain features by mask": makes sure none of the masked "feature points" are used in the sparse cloud construction or camera alignment. Masked areas are later ignored for dense cloud and texture generation.
note: you can make masks in some other program and import them (as an alpha channel, a separate image, a background reference, etc.). Press "Esc" to clear the mask from a photo.
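+ scripting note: these Align Photos settings map directly onto the Python API. A sketch using the values suggested above (argument names from the 1.2-era API; newer versions may rename them):

    # match features and align cameras with the settings discussed above
    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,             # "High"
                      preselection=PhotoScan.GenericPreselection,  # pair preselection speedup
                      filter_mask=True,                            # "Constrain features by mask"
                      keypoint_limit=70000,                        # key point limit
                      tiepoint_limit=4000)                         # tie point limit
    chunk.alignCameras()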
* After it is done processing:
– Click the "Show Cameras" button; if any are way off, delete them (then you can select those photos and try aligning them again, I believe).
– adjust the bounding box around what you want to keep moving forward (crop out stuff like a messy floor, or scraps of trees above). Best to make this as small as possible (so the maximum amount of your dense cloud is based on things you care about)
– you can also select and delete points directly at this step (if you don't want them used in the calculations for the dense cloud). This is a good thing to do for extraneous noise (points floating in the air where you know there is no relevant geometry)
Advanced note: you can use "Edit > Gradual Selection…" to filter out badly projected points based on some zany criteria (like how many photos each point needs to appear in).
Some points are clearly far away from where they should be ("reprojection error", aka a false match). If you delete any of these, you really should run Tools > Optimize Cameras (select all?) to improve camera alignment accuracy; see the sketch below.
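+ scripting note: gradual selection and camera optimization can be scripted too. A sketch against the 1.2-era API (the 0.5 threshold is just an illustrative value, not a recommendation):

    # select sparse cloud points with high reprojection error, delete them,
    # then re-optimize the cameras (as recommended above)
    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    f = PhotoScan.PointCloud.Filter()
    f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReprojectionError)
    f.selectPoints(0.5)                       # hypothetical threshold; inspect before deleting
    chunk.point_cloud.removeSelectedPoints()
    chunk.optimizeCameras()                   # re-optimize alignment after the cleanup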
2. Build Dense Cloud (~3 hours)
– Quality: Medium to Highest (anywhere from a couple of minutes to hours, depending on the setting)
– Advanced > Depth Filtering: a depth map will be made for each photo; this controls how much to ignore fine detail (go "Mild" if you require some small details; go "Aggressive" if you'd rather smooth out noise)
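+ scripting note: the same two choices appear in the Python API. A sketch against the 1.2-era API (newer versions split this into buildDepthMaps + buildDenseCloud):

    # build the dense cloud with the quality / depth filtering tradeoffs above
    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildDenseCloud(quality=PhotoScan.MediumQuality,  # Medium..Ultra: detail vs. time
                          filter=PhotoScan.MildFiltering)   # "Mild" keeps small details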
* After it is done processing:
– select and delete points you don’t want to see included in your final mesh. (all points visible at this step, inside your bounding box, will be used to set vertices in your final mesh!)
Note: you can set a program preference to save depth maps, which will speed up any future dense point cloud generation.
Advanced note: with "Tools > Dense Cloud > Select Points by Color" you can remove things that clearly aren't part of your subject.
3. Build Mesh (~15 minutes)
– Surface type: always go "Arbitrary" ("Height Field" is only for 2.5D horizontal surfaces, like aerial photos of terrain, though it generates much faster).
– Source data: always go "Dense cloud" ("Sparse cloud" means it would base the mesh off the initial tiny point cloud used to align photos)
– Face Count: depends on speed and desired detail. Note that you can enter a custom amount if you want much more or less than what it offers. Zoom in on the mesh afterwards and see whether it's dense enough (then check your dense cloud; if there aren't any points in the problem area, you may need to go back and trim your bounding box down so that area gets more attention, or you may need to start over with more photo coverage)
– Interpolation: this controls hole filling. "Enabled" is best. "Disabled" means no holes will be filled, and it takes longer to calculate (usually looks crummy). "Extrapolated" means it will try to guess as much as possible (usually leads to bizarre stretchy walls in areas you don't want)
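+ scripting note: Build Mesh with the choices recommended above, as a sketch against the 1.2-era API:

    # build the mesh from the dense cloud: arbitrary surface, holes filled
    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildModel(surface=PhotoScan.Arbitrary,                  # not Height Field
                     source=PhotoScan.DenseCloudData,              # not the sparse cloud
                     interpolation=PhotoScan.EnabledInterpolation, # fill holes
                     face_count=PhotoScan.HighFaceCount)           # preset; a custom count is set in the dialog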
* After it is done processing:
you can select polygons and delete them.
Advanced note: you can use "Edit > Gradual Selection…" to select floating wisps of garbage (using "Connected component size"), or ridiculously oversized polygons (like those from extrapolated interpolation, using "Polygon size"). The "size" is a percentage of the entire model's size. Also, you can expand the selection with Page Up (+Shift).
4. Build Texture (~5 minutes)
* If you used any questionable (slightly blurry) photos, you should open the Photos pane and disable them before starting this calculation.
– Mapping Mode: "Generic" is best (makes no assumptions about the subject). "Adaptive orthophoto" mostly values flat planar terrain, but will separate out UVs for vertical portions (good if you have no slopes, just strictly horizontal and vertical surfaces). "Orthophoto" is like an aerial photo (will have very poor texture quality in vertical areas). "Single Camera" will let you project the texture across the entire mesh from one camera's perspective (you can pick which camera by name, after choosing this option)
+ "Keep UV" must be used if you want to preserve the UVs you assigned in some other program (when reimporting to slap a texture on)! This will also save tons of time if you want to try out different blending modes.
– Blending Mode: Controls how overlapping photo areas are mixed, per pixel.
Use "Mosaic" (averages the low-frequency elements of overlapping photos, while taking the high-frequency detail from the most relevant camera).
“Average” evenly mixes everything (likely losing fine details).
"Max/Min Intensity" will use the pixel from whichever overlapping photo has the max/min value at that pixel.
"Disabled" (uses the most relevant camera for each pixel… tends to feel very rough; no anti-aliased feel.)
– Texture Size: make sure you use power-of-two dimensions (2048 is my max, if heading towards games or online viewing)
– Advanced > Color Correction: this will try to make all photos have the same color values, even if the brightness changed a lot. Takes a LOT more time. I'd only do this if you really can't live with the results.
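+ scripting note: in the Python API, Build Texture is two calls, UV generation and then the texture bake. A sketch with the settings above (1.2-era names):

    # generate UVs, then bake a power-of-two texture with mosaic blending
    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildUV(mapping=PhotoScan.GenericMapping)        # "Generic" mapping mode
    chunk.buildTexture(blending=PhotoScan.MosaicBlending,  # "Mosaic"
                       size=2048)                          # power-of-two texture size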
5. Export
– you can upload directly to Sketchfab if it all went perfectly. Convenient!
– I usually save as a Wavefront OBJ, which generates a mesh file and a texture file that I can import into many other programs for cleanup or background use.
– Once you clean up the mesh in this file, and/or fix its UVs, you can reimport it into PhotoScan to generate a new texture. But only if you didn't change the orientation of the mesh! If it's upside down or tilted, you should fix the orientation in PhotoScan before exporting. I'd recommend using Autodesk MeshMixer for cleanup (good tools for fixing holes, paintbrushes for interactive smoothing while retopologizing, and more).
“Tools > Import > Import Mesh” is how you bring in an external mesh (supports: .stl, .fbx, .obj, and more). Usually you import a mesh so you can project textures onto it from your aligned photos.
note: some people advise creating a new chunk before importing, so you don't overwrite your existing work (but I usually don't save after importing, since I'm just there for a one-off texture).
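+ scripting note: saving, exporting, and reimporting can be scripted as well. A rough sketch; exportModel/importModel keyword arguments vary noticeably between PhotoScan versions, and the file paths here are just placeholders:

    # save the project, export an OBJ + texture, and (later) bring a cleaned mesh back in
    import PhotoScan

    doc = PhotoScan.app.document
    chunk = doc.chunk
    doc.save("scan.psx")                         # placeholder path
    chunk.exportModel("scan.obj", format="obj")  # writes the mesh plus a texture file alongside
    # ...after cleanup in MeshMixer (keeping the orientation!):
    # chunk.importModel("scan_cleaned.obj", format="obj")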
Lingering Questions:
1) you can export points in many formats… so could some other program offer better meshing from the dense cloud?
2) how do you align chunks? (there is a chunk alignment icon I've never used)
3) others have chosen adaptive orthophoto for texture mapping. Should check its UVs (mosaic is a mess, so if adaptive orthophoto isn't horrible, maybe it's a better starting point?)
(you can check UVs under Tools)
4) can you generate masks from a generated 3D model? So you could start over with the exported OBJ and use it to tighten everything up?
5) can you export stitched panoramas from a camera station group? (try it!) (you can also export images with lens distortion removed).
6) should you move all un-aligned photos to their own chunk, and see if you can align them back in later?
Ref:
this 13-minute Vimeo video was very helpful:
(and this 11-minute follow-up helped with cleanup tips in Autodesk MeshMixer: https://vimeo.com/123702711 )
Tags: 3D modeling, Agisoft, Photogrammetry, PhotoScan, texturing