When I first became interested in photography, I followed the typical beginner approach of a date- and event-based folder structure that held my RAW and processed images. I was generally disappointed with this approach: it was difficult to locate specific images, and impossible to differentiate edited from unedited images.
I soon upgraded to Apple Aperture, and then to Lightroom, which provided great improvements over the ad-hoc directory system, but I still found my system troublesome. My library had grown to several thousand images, and I was still having trouble identifying which photos needed work. Further, I found Lightroom’s approach of layering non-destructive edits over raw files fragile, and I had concerns that it locked me into Adobe’s toolset.
Last year, I enrolled in Dave Morrow’s Landscape Photography Journals course with the aim of improving my Photoshop skills. Unexpectedly, I was introduced to his thoughtfully designed workflow, which allows easy insight into the progression of edits, ensures raw files remain untouched, and bakes edits into .tiff files ready for archiving into his portfolio. I immediately adopted his workflow, but after experiencing technical problems and an awful customer service experience with Adobe, I decided to rework it around Capture One (C1) and Affinity Photo, and to tailor it somewhat to suit my specific needs.
Before embarking on this redesign, I identified 5 important principles to guide it:
- Make work visible: ensure that at any time, I can easily identify which photos need work, and what stage of the processing pipeline they’re at.
- Optimise the workflow to maximise throughput, whilst balancing opportunities to do creative work when I have sufficient creative energy and the tools and environment needed. These tools include my colour-calibrated monitor in a dark room and a graphics tablet. In less ideal creative environments, I still want to be able to efficiently cull, keyword, or publish photos online.
- Increase editing quality by incorporating a long review process that brings detachment and objectivity.
- Simplify the data model by baking edits and metadata into the final processed photo files. Follow Dave’s advice of keeping ALL archived photo files in two main folders: originals and complete. This provides simple backup, security, and also eliminates vendor lock-in.
- Overlay keywords onto images using digital asset management (DAM) software to allow taxonomy restructuring, and lean heavily on smart albums rather than physical folder structures for effective navigation through my archive.
The new design
My workflow consists of 8 steps across two phases, one screenshot illustrating each phase.
The first phase covers 5 steps, including ingestion, creative editing, and baking archivable .tiff files with embedded metadata. Progression through these stages occurs by moving sets of raw files, or C1 Sessions, from folder to folder.
Backup of raw files to my direct attached storage (DAS) is automated through the import functionality within C1.
Progression through stages 6 and 7 is managed by moving images between catalogue albums, and step 8 archives the high-quality images to my DAS.
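Setting up the phase-one stage folders is a one-off task that can be scripted. A minimal sketch in Python, using the stage folder names from this post (the ~/photography root is the base assumed throughout):

```python
from pathlib import Path

# Phase-one stage folders, named as in this workflow.
STAGES = [
    "1_ingest_and_cull",
    "2_geotag",
    "3_create",
    "4_stew_and_review",
    "5_metadata_and_bake",
]

def create_stage_folders(root: Path) -> list[Path]:
    """Create each stage folder under root, returning the created paths."""
    paths = [root / stage for stage in STAGES]
    for p in paths:
        p.mkdir(parents=True, exist_ok=True)
    return paths
```

Moving a batch (or a whole C1 Session) between stages is then just a directory move, which is what makes the work-in-progress state so visible.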
Step 1 – ingestion and culling
I believe improving my photography will ultimately require me to slow down when behind the camera. I practice techniques such as visualisation, deliberate composition, and exposing to the right using a live histogram to maximise image quality and minimise the number of shots taken to get a keeper. Not taking blurry photos with poor composition and exposure is the most efficient way of reducing culling time.
However, it’s doubtful that any amount of improvement will ever yield a 100% keeper rate. Indeed, in some kinds of photography and situations, rattling off tens of rapid-fire shots is the way to increase the chances of one sticking.
Culling will always be part of my workflow, and it’s typically mundane and frustrating: choosing between 8 almost identical images is like choosing your favourite child. Eager to minimise this pain, I invested some time evaluating Photo Mechanic and FastRawViewer, two tools that provide extensive culling features. I discovered a brilliant tutorial on FastRawViewer and its objective culling workflow. It covers exposure statistics, focus peaking, and fast shadow and highlight recovery to help establish the best ‘keepers’, and I was smitten.
My culling process is pretty simple. I plug in a loaded SD card, and within FastRawViewer, I tag each photo as Reject, Second, or Select, using the above features where needed. Once I’ve reviewed the seconds, I simply press ‘C’ to copy the raw files into my ~/photography/2_geotag folder ready for step 2. The 1_ingest_and_cull folder is used only when processing my previous photo archive.
Step 2 – geotagging
Once I’ve copied a batch of photos into my 2_geotag folder, I load them into HoudahGeo and geotag them. I take the majority of my photos during hill walks, which I typically record using Strava. This allows me to export tracklogs, which HoudahGeo can easily load and use to auto-tag photos based on time.
I choose to stamp the geolocation data into EXIF headers within the raw files because life’s too short to worry about XMP sidecar files falling out of sync.
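Time-based matching of photos to a tracklog is simple in principle. Here is my own illustrative sketch of the idea (not HoudahGeo’s actual algorithm): given a tracklog sorted by time, find the GPS fix nearest each photo’s timestamp.

```python
from bisect import bisect_left
from datetime import datetime

# A track point: (timestamp, latitude, longitude), as in a GPX tracklog.
TrackPoint = tuple[datetime, float, float]

def nearest_fix(track: list[TrackPoint], shot_time: datetime) -> TrackPoint:
    """Return the track point closest in time to the photo's timestamp.

    Assumes the track is sorted by time, as GPX tracklogs are.
    """
    times = [t for t, _, _ in track]
    i = bisect_left(times, shot_time)
    if i == 0:
        return track[0]
    if i == len(track):
        return track[-1]
    before, after = track[i - 1], track[i]
    return before if shot_time - before[0] <= after[0] - shot_time else after
```

In practice the camera clock and GPS clock can drift apart, which is why HoudahGeo offers a time-offset adjustment before tagging.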
Step 3 – creating
After learning intermediate Photoshop for photography, I recently switched to C1 for the bulk of my editing. I’d heard great things about its RAW conversion quality, and I thought its all-in-one approach to editing, conversion, and digital asset management was well thought out. There are some great tutorials on the Capture One website, and these introduced the amazing colour tools it provides.
I originally designed this workflow around C1 Sessions for the creative steps and a C1 Catalogue for archiving. However, I discovered limitations in the catalogue’s metadata export support, switched to Lightroom for cataloguing, and subsequently raised a feature request with Phase One, C1’s developers. If and when they implement it, I’ll most likely switch back to the C1 catalogue.
Finally, I sometimes use Affinity Photo if a particular image requires extensive pixel editing, tone mapping, or panorama stitching. C1’s Sessions provide great interoperability with Affinity Photo.
This step can further be broken down into 3 sub-steps:
- I create a C1 Session in 3_create with an appropriate title. I then import the raw files from the 2_geotag directory, which copies the files into the Session’s capture folder. I have also configured it to automatically back up the original files to my DAS, apply basic metadata, and rename each file to conform to my naming convention.
- As each photo has already gone through my culling process, I typically have only ‘keepers’ in these Sessions, so I use the capture folder to hold images that require editing, and the selects folder to hold images that are being or have been edited. I move photos one by one into the selects folder and use C1, and possibly Affinity Photo, to crop, fix exposure (I typically expose to the right), colour tone, apply black and white conversion, and perform any other editing required. I’ll often create variants, as I might want to produce several edits from one original RAW (colour, black and white, different crops).
- Using this flow means it’s easy to tell when I’ve completed the creative process on a batch: the capture folder will be empty, and I perform a quick review of the images in selects. Once I’m happy with this review, and have made any resultant amendments, I’ll move the Session out of 3_create to the next step.
Step 4 – stew and review
In this step, I try to improve image quality, and often the subtlety of an edit, by performing a final review. If this review happens too soon after capturing the raw images or completing the initial edit, it will lack objectivity: I’ll be in the same mood when reviewing as when I did the edit, and might still be high on adrenaline from getting ‘the shot’. I learned from the Dave Morrow course that leaving an image unseen for a moderate time, and then reviewing it again, brings detachment and objectivity. Using this technique, I can almost see how other people would view my edits, and the process provides valuable feedback.
When moving the C1 Session into 4_stew_and_review, I rename it by prefixing the date and time of the move. I use this date to ensure I ‘stew’ the images for at least a month before doing my final review. Prefixing the date to the folder also allows me to sort the folders alphabetically and choose the most ‘stewed’ ones to review when I’m in the mood.
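The date-prefix convention is easy to mechanise. A sketch, assuming a prefix format of YYYY-MM-DD_HHMM (my illustration; the exact format in the original workflow may differ) and a 30-day stew period:

```python
from datetime import datetime, timedelta

STEW_PERIOD = timedelta(days=30)  # stew for at least a month

def stew_name(session: str, now: datetime) -> str:
    """Prefix a session folder name with the date it entered the stew stage."""
    return f"{now:%Y-%m-%d_%H%M}_{session}"

def ready_for_review(folder_name: str, now: datetime) -> bool:
    """True if the date prefix shows the session has stewed long enough."""
    stamp = datetime.strptime(folder_name[:15], "%Y-%m-%d_%H%M")
    return now - stamp >= STEW_PERIOD
```

Because the prefix sorts lexicographically in date order, an alphabetical folder listing doubles as a stew queue, oldest first.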
I always use my pitch-black home office with its colour-calibrated monitor to improve the review process, and often make amendments based on the objectivity brought about by this stewing. If I’ve made a significant edit, I might reset the date to stew for another month, but typically, once I’ve done the final review, I’ll progress the Session to the next step by moving it to 5_metadata_and_bake.
Step 5 – apply metadata and bake archivable final images
Coming into this step, I’ll have a C1 Session in my 5_metadata_and_bake folder with completed edits that have gone through my final review. Unless the images required round-tripping to Affinity Photo, the final images will still be the original raw files with C1 edits overlaid.
When I used the Lightroom non-destructive approach for editing and long-term cataloguing, I would archive the resultant pair of raw file and XMP sidecar file for each image. This would allow future edits to start where the previous edit finished. However, I worried about keeping the sidecar file in sync, and about tool-specific content in that file locking me into Lightroom.
To mitigate this, I now bake the edits into 16-bit TIFF files using C1’s Process Recipes. TIFFs are widely supported, offer great metadata tagging support, and are lossless. Unfortunately, they are quite big, but this is a price I’m happy to pay to ensure most editing software will still be able to open these archive files in 15 years.
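The size cost is easy to estimate. Assuming an uncompressed 3-channel TIFF from a 24-megapixel sensor (an illustrative figure, not necessarily my camera), 16 bits per channel works out at roughly 144MB per file:

```python
def tiff_size_mb(megapixels: float, bits_per_channel: int = 16, channels: int = 3) -> float:
    """Approximate uncompressed TIFF size in MB: pixels x channels x bytes per channel."""
    return megapixels * 1e6 * channels * (bits_per_channel / 8) / 1e6
```

So a 24MP image is roughly 144MB at 16-bit and 72MB at 8-bit; lossless compression in the TIFF recipe shrinks this somewhat, but the archive still dwarfs the raw files.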
I consider these TIFF files read-only, and so I want to embed directly into the file only the kind of metadata that won’t change, like the IPTC title and description. In this step, before running the C1 Process Recipe, I fill in the appropriate metadata fields. I don’t bake keywords into the file, however: I’ll continue to evolve my keyword taxonomy long into the future, and don’t want to have to update every file in my archive when doing so.
Step 6 – overlay keywords
Phew, nearly there… At this step, I’ve got a C1 Session sitting in my 5_metadata_and_bake folder containing final edits with metadata applied, and I’ve run C1 Process Recipes to bake these edits into my final archivable images in the Session’s output folder.
I now need to overlay keywords onto these images before publishing and cataloguing, but I want this metadata to remain solely in a DAM tool. As I mentioned earlier, I currently use Lightroom for this role. I open my main Lightroom catalogue and import the files from the Session’s output directory. I configure Lightroom to automatically copy the files into a local_complete folder, where all my files waiting to be archived to my DAS sit, and to automatically add the images to an overlay keywords collection on import. From this point on, progression through the workflow uses Lightroom collections rather than physical folders.
I use Lightroom’s excellent keywording tools to overlay keywords onto the images, and at the same time evolve my taxonomy. I keep a checklist of keywording approaches to apply. This might seem over the top, but it allows keywording to become almost mechanical, meaning I don’t have to apply too much mental focus to what is a pretty uncreative task.
I then delete the C1 Session and all its contents! The original raw files have already been automatically backed up to my DAS, and I’m not interested in picking up the edits where I left off. I’ve gone through a significant review period, and if in the distant future I want to re-edit files with improved skills and experience, I’d probably be better off starting from scratch with the raw file.
Finally, I move the keyworded images from the overlay keywords collection to the collection for the next step.
Step 7 – publish
By this stage, my main Lightroom catalogue contains my final edits with baked-in metadata and overlaid keywords. While the archivable files still sit on my laptop’s hard drive, I use Lightroom plugins to publish them to their online homes. I export all interesting photos to Flickr, and use my portfolio site hosted with SmugMug to showcase my best work.
I maintain my keyword taxonomy to ensure private tags, including friends’ and family members’ names, are omitted from this publishing process, and I also ensure other sensitive metadata is removed. Limited functionality in this specific area has forced me to use Lightroom for cataloguing instead of my preferred C1 Catalogues.
As this step is discrete – separate from the preceding and following steps – it allows me to buffer or queue publishing to Flickr. This is useful because I frequently publish photos to groups that have daily limits, so I upload one photoset per day.
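Buffering one photoset per day is just date arithmetic. A sketch (the set names here are hypothetical):

```python
from datetime import date, timedelta

def schedule_photosets(photosets: list[str], start: date) -> dict[str, date]:
    """Assign each queued photoset an upload day, one set per day,
    so that daily group limits are never exceeded."""
    return {name: start + timedelta(days=i) for i, name in enumerate(photosets)}
```

A long backlog simply stretches the schedule out; the queue drains at a constant, limit-safe rate.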
Once I’ve published to the various destinations, I once again move the images, this time into my archive to DAS collection.
Step 8 – archive
I really don’t want to lose raw or edited images. My photo collection currently runs to about 1TB, and my laptop’s hard drive is also 1TB, so I needed a robust and scalable local storage solution. I purchased a TerraMaster TD2 with its hard drives configured in RAID 1. It connects to my laptop using Thunderbolt, so file transfer is as fast as rusty disks will allow.
I use this DAS as my primary file archive. I have two folders on the drive: one holds all my raw images, the other all my TIFF files. This folder structure takes no effort to maintain or back up to a variety of external hard drives, in case the DAS is stolen or we suffer a fire.
As it’s only a DAS rather than a NAS, I can’t easily transfer the completed images to it from my laptop over Wi-Fi. This is why I have an explicit step in my workflow where I move files from local_complete on the laptop to the DAS: I walk upstairs and plug the laptop directly into the TD2.
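The archive move itself is a plain file move from local_complete into the DAS folder. A sketch of the step, assuming both folders are mounted (in practice this can equally be done in Finder or from Lightroom):

```python
import shutil
from pathlib import Path

def archive_to_das(local_complete: Path, das_complete: Path) -> int:
    """Move every baked file from the laptop's local_complete folder to the
    DAS complete folder, returning how many files were moved."""
    das_complete.mkdir(parents=True, exist_ok=True)
    moved = 0
    for f in sorted(local_complete.iterdir()):
        if f.is_file():
            shutil.move(str(f), das_complete / f.name)
            moved += 1
    return moved
```

Because the archive is just two flat folders, this is the entire backup data model: nothing tool-specific to migrate, ever.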
Phew, it’s done!
Well, writing that post took way longer than I expected. The workflow I describe could be seen as fairly complex. I’m not actually sure it is; it just makes various aspects of the editing and cataloguing process explicit. It allows me to work on photos based on mood and available resources. I can do creative work when I have the energy and am sat in front of my calibrated monitor, and I can apply keywords in an almost mechanical process whilst absorbing Coronation Street in the background.
Apart from the inherent month-long delay during the stew and review step, I can push a set of images through this workflow in about an hour if only basic editing is required.
Most of all, though, it’s robust. Dave Morrow describes the negative impact on creativity if you don’t trust your photo workflow and storage approach. Previously, I was reluctant to invest heavily in editing certain photos because I knew my backup solution wasn’t solid; having already lost images in the past, this fear caused procrastination. This workflow, heavily influenced by Dave’s approach, mitigates these concerns, and I’d recommend his courses to anyone interested in improving their landscape photography and editing skills.
I’d love to hear feedback on this workflow, and am happy to write subsequent posts diving into the steps further, or explaining the iterative process in getting the workflow to where it is now. During development of this workflow, I took the bold step of deleting all previously uploaded Flickr photos, and reprocessing my entire raw archive (15k images) using the new flow to hopefully gain all the advantages I envisage. This does mean my Flickr and portfolio are fairly bare at the moment. The workflow is working though, and I’m regularly uploading new photos especially to Flickr.