jpegs or tiffs (scanned from negatives) #1
by Rolf Schmolling
January 19, 2012 - 8:13pm

Hi,

I would like your opinion: I shoot film and post-process (“digital darkroom”) in Aperture. My question concerns scanning: since all image manipulation in Aperture is done non-destructively, shouldn't using JPEGs as the base files be without loss of quality versus using TIFFs? I am scanning with SilverFast Ai and can choose either TIFF or JPEG as the result. The difference in file size is considerable: a TIFF (300dpi; 24cmx36cm) is a whopping 40-60MB, while a high-quality JPEG is around 10MB.

Thanks,

Rolf

by PhotoJoseph
January 20, 2012 - 3:32am

Rolf,

Great question. I’m going to post this on the main tips site as well and get some additional feedback rolling in on it.

The essential advantage of TIF is twofold: one, it’s uncompressed, and two, it can have a higher bit depth than a JPG. JPEGs can only be 8-bit, whereas a TIF can be saved at up to 16-bit (the spec actually allows for 24-bit RGB or 32-bit CMYK).
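
To put numbers on that bit-depth difference, here's a quick back-of-the-envelope sketch in plain Python (no libraries needed):

```python
# Tonal levels per channel at each bit depth: 2 to the power of the depth.
# 8-bit gives 256 steps from black to white; 16-bit gives 65,536 --
# over 250 times finer gradation to play with before anything clips or bands.
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits} levels per channel")
```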

If you were scanning from prints, I don’t believe there’d be any advantage to scanning at a higher bit-depth into TIF. But since you’re scanning negatives, if your scanner is very high quality and can scan at higher bit depths, then yes, a TIF (saved at that higher bit-depth) would be effectively retaining more of the original information.

CAVEAT: What I know about scanners you could fit on the head of a pin. So what I just said above, while theoretically correct, may in all practical terms be absolute bollocks. If a scanner of any caliber is incapable of scanning at higher than 8-bit depth, or if it’s incapable of extracting the super-fine color detail that merits a higher bit depth, then saving to a higher-than-8-bit TIF is pointless. I simply don’t know what scanners today are capable of. So again, if a scanner can truly scan at higher than 8-bit depth, then by all means, saving as a high-bit TIF would give you more data. Think of it almost like a RAW file. A RAW file holds more than 8 bits, and it’s only when you render it to screen that it has to be reduced to 8-bit depth. The raw data is all there; you get to manipulate it however you like to pull it into view on screen. This is the tradeoff of shooting JPG, of course: if you shoot JPG, all you’re saving is 8 bits of data, throwing away everything else that the sensor captured. More on accessing that higher-than-8-bit data at the end.

As far as uncompressed TIF vs. compressed JPG goes (setting aside the bit-depth discussion), a quality-10 JPG is virtually indistinguishable from a TIF. Unless these are the most important images of your life, I’d say a full-quality JPG is absolutely sufficient. I was also advised once that if you see a higher-than-10 quality option, i.e. quality 12, it is not any different from 10 and just takes more space. I have a hard time believing this on a molecular level, but it’s easy enough to test the difference for your setup; more on that in a moment.

So in short, if you aren’t getting higher than 8-bits from your scanner, and you aren’t archiving the most important images in the (your) world, then I’d just go JPG.

How do you compare qualities? Using Photoshop it’s easy. Scan a file at the highest quality setting your scanner can muster, and then save out multiple files from the same scan. Save a TIF, a quality 12 JPG if it’ll do it, and a quality 10 JPG, and just for fun, do a quality 8 as well.

Open all the files in Photoshop, and stack them as layers in one file. Make the TIF the bottom, since that’s the “baseline” file. Save this stacked file so you can easily revert to it.

Turn off (hide) all the JPG layers except the quality 8. Set that layer’s blending mode to “Difference”. You will see an all-black image. If there’s any difference between the original and the JPG, you’ll see it here in the form of colored pixels or artifacts (to see the effect in action, nudge the JPG layer one pixel and you’ll see the offset immediately; now put it back). With the quality-8 JPG, you might see some of this color “difference” right away. But to really see it, do this:

Flatten the file (discarding the other layers; you will revert to get them back). Now hit the Auto Levels command (shift-command-L, I think?). The faintest speck of color will be raised to full brightness, and the differences between the files will leap off the screen. Do it again… hit Auto Levels again and again, and you’ll see it get more and more obvious. THAT is the difference between the TIF and the quality-8 JPG.

Now that you understand that quality-8 sucks, revert the file and repeat the test with the quality-10 JPG. Can you see any difference after the first auto-levels? Chances are if there’s anything, it’ll be really, really minor. Can you live with that? Probably. If not, try quality 12, but again I doubt you’ll see much — if any — difference showing up at all.
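
If you don't have Photoshop handy, the same comparison can be approximated in a few lines of Python. This is a rough sketch, assuming Pillow and NumPy are installed; it uses a synthetic noise image in place of a real scan, and instead of amplifying the difference visually with Auto Levels, it simply averages the per-pixel difference into a single number per quality setting:

```python
from io import BytesIO

import numpy as np
from PIL import Image

def jpeg_roundtrip(img, quality):
    """Save the image as a JPEG at the given quality, then reload it."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return Image.open(BytesIO(buf.getvalue())).convert("RGB")

def mean_abs_diff(a, b):
    """Per-pixel absolute difference (the 'Difference' blend mode), averaged."""
    return float(np.abs(np.asarray(a, np.int16) - np.asarray(b, np.int16)).mean())

# A synthetic noisy "scan" stands in for the TIFF baseline.
rng = np.random.default_rng(0)
baseline = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

mad_q50 = mean_abs_diff(baseline, jpeg_roundtrip(baseline, 50))
mad_q95 = mean_abs_diff(baseline, jpeg_roundtrip(baseline, 95))

# The lower the quality, the further the JPEG drifts from the baseline.
print(mad_q50 > mad_q95)
```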

Let me talk a moment about the working bit-depth. Aperture is working non-destructively, as you know. And what you see on screen is only 8-bit, because that’s all that our screens can display. However the mathematical calculations that are happening to modify your image are being run in floating-point bit depth, which is to say, from my limited understanding of the topic (and here I really may be showing how little I know), you’re working in a virtually unlimited color space (or perhaps it’s 32-bit… which is still, you know, a lot). So if you are opening a RAW, 16-bit or 12-bit TIF file, you are opening all that data into Aperture’s floating-point bit depth to work on. You can then pull data down into the visible 8-bit space to show on screen (this is where the extended Curves view comes in handy, where you can see data beyond the 8-bit space in the Curves’ histogram). So if you do have a 16-bit TIF that actually does have more than 8-bits of data, then yes by all means, Aperture will read that and give it to you to manipulate.
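
The value of that wide working space can be illustrated with a toy NumPy example (a sketch of the principle, not how Aperture actually implements it): the same darken-then-brighten edit destroys shadow detail when every intermediate result is rounded back to 8-bit integers, but survives intact when the intermediates stay in floating point.

```python
import numpy as np

pixels = np.array([10, 60, 200], dtype=np.uint8)  # three sample tones

# 8-bit pipeline: every intermediate result snaps back to whole 0-255 values.
dark8 = pixels.astype(int) // 16      # heavy darken: 10 -> 0, 60 -> 3, 200 -> 12
back8 = np.clip(dark8 * 16, 0, 255)   # brighten again: 0, 48, 192

# Floating-point pipeline: intermediates keep full precision.
darkf = pixels.astype(np.float64) / 16
backf = np.clip(darkf * 16, 0, 255)

print(back8.tolist())  # [0, 48, 192]      -- shadow crushed, tones shifted
print(backf.tolist())  # [10.0, 60.0, 200.0] -- fully recovered
```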

Now hopefully you can make a decision that fits your scanner and scanning software, and your specific needs!

@PhotoJoseph
— Have you signed up for the mailing list?

by Robert Boyer
January 20, 2012 - 5:14am

Pre-apology: I am on a long conference call but wanted to add a couple of things before I forget, so this may be disjointed as I am multitasking.

Background:

I still shoot A LOT of film, from 35mm to 8x10. I scan a lot of it myself even though my scanners are nowhere near as good as can be had… I was probably one of the last photographers to exclusively use film for fashion/commercial work (hyperbole, but I was shooting tons of MF for commercial work up through 2007; no digital).

Here are a couple of thoughts in random order…

If you are scanning individual frames, or having it done for you, for “final” output, by all means scan to 16-bit TIFF files. The limiting factor here is your time: scanning frame by frame takes a long time no matter what your output format is, so why bother with a JPEG?

The real question is resolution, and this is a tough one to crack for film. Let’s divide it into two categories. The first is commercial drum scanners at the top of the range: bite the bullet and go for the highest res you can get; may as well only do it once. This assumes that the operator knows what they are doing and the scanner has been set up for the film you are using to give you the best possible output. The second category is typical personal scanners, even good ones, whether film scanners or flatbeds. Some of them work a little differently than others, but if they can actually scan at a lower res than maximum, you may want that in some cases. Without fail, most personal desktop scanners will have some degree of grain aliasing, which causes two issues.

First issue: grain will be magnified, because even fine grain “shows up” across more than one scanner pixel even when it is smaller than a scanner pixel. I might be overstating this, but trust me. When grain is larger than one pixel, the same thing happens at the edges of the grain, making matters worse.

Second issue: digital resizing (as in making images smaller) works by retaining high-contrast pixel relationships while throwing out low- or no-contrast pixel relationships. With grain (and digital noise) this makes grain stand out more when you make the image smaller (especially on low-dpi devices like screens). This is exactly the opposite of what happens with optical printing, where printing small makes grain small and printing big makes grain big. For example, say you have a perfect scanner and film whose grain is exactly one pixel (forget about aliasing for a moment). At the native resolution, grains will be one pixel; when you down-res, grains will still be one pixel. They cannot get smaller, and they will remain, because the smooth areas are the ones thrown out when down-res-ing.

Hopefully that was not too confusing; here is the point. Sometimes lower-res scans actually look better, depending on the output size and medium. Just something to think about. If you like the idea of scanning at maximum res yourself and having an all-purpose master file, you will absolutely need some degree of noise reduction to remove luminance noise when downsizing images, to give the same visual impression of film when printed at similar physical dimensions.

Having said all that here is what I normally do…

1) If I am shooting color roll film, I have it processed by a decent lab (RPL) and machine-scanned on their Noritsu, getting back my negs and large 8-bit JPEGs. For personal work and for some professional purposes these look fantastic, really. No need to tweak. If I NEED to do massive editing to contrast/color, I will either do a scan myself at 16-bit or, if it is really, really important, send it out as I used to and have it drum scanned as a single frame.

2) If I am shooting real black and white, I develop it myself, scan a low-res contact sheet (I usually make wet ones too), and then scan individual frames at 16-bit for post. Again, not because every one needs it, but because I don’t want to do it again. If the negative is really important I will have it drum scanned (a huge expense).

So if you just want your images quick, have the lab scan them and be happy with what will most likely look GREAT right out of the can as a JPEG. RPL, amongst others, does a great job with their machines (Noritsu; I don’t use the Fuji), and I am done.

In a pinch, if I need the images NOW, I will get them done locally by a crappy Walgreens tech. In most cases they will still look great (mostly Noritsu machines out there now) and work fine for proofing. The only issue is that 99% of drug stores have the machines set up ONE way, by someone else, and the operator KNOWS NOTHING about how they work. The default setup for a lot of drug stores is a low-res 4x6 scan, so while the scans look good in most cases, the res is an issue.

RB

by Robert Boyer
January 20, 2012 - 5:31am

PS. Nice tip about the JPEG quality comparison….

by Rolf Schmolling
January 20, 2012 - 5:48am

Hi, thanks for your answers!
To elaborate, part of my question comes from my different sources of digitized images (and that is at least partially covered by what Robert posted).

I scan myself with varying degrees of success. Usually I let SilverFast Ai choose a file size from a suggested output image size (20cm longest side for a 35mm negative, 300 dpi), and I get a scanning resolution which is usually less than 4800dpi, the official maximum resolution of my Canon 8600F (real-life resolution, according to tests, is about 2400 dpi). This is either b&w or color negative film; I now choose RGB 48-bit color. I can save as TIFF or JPEG and choose quality, compression, the works (that’s where my question comes from). I have seen excessive grain when using (too much) unsharp mask, but usually this was not a big problem.

Then I have scans from the local supermarket: smallish JPEGs, ca. 1500 pixels on the longest side, max. 1000KB. Well, that’s enough… but not all that bad, either. One example can be seen here: http://flic.kr/p/bfnfbM

And then I have scans from a not-yet-professional lab where the scan size is about 2283x15xx pixels (color processing); about 2MB, JPEG. Example: http://flic.kr/p/b6T86X

And from a lab where I did my b&w processing (Tri-X): 2941 × 1960, RGB, around 5MB, JPEG. Results after Aperture processing can be seen here: http://flic.kr/p/bbDpuk

or here http://bit.ly/xP2AYc

The last images were quite alright in printing (Blurb) or off my low-quality Lexmark (10x15cm).

My conclusion from what I have read is that my own scans are going to be TIFFs from now on, but I might continue not to use the maximum resolution, and instead use something reasonable (20cm longest side, 300dpi output).

Now I wonder: should I convert the JPEGs I get from the different labs (I don’t have an equivalent of RPL here in Germany, and I DO worry about costs) to TIFF prior to importing into Aperture?

I have to admit that I did not fully get what Robert said about grain…?

regards, Rolf

by PhotoJoseph
January 20, 2012 - 5:54am

Rolf,

I’ll address your last question, about converting from JPEG to TIF: no, there’s no point. You aren’t going to add any additional data by doing this, and since Aperture isn’t working in the space of the file itself, but copying the file into its own working space, there’s no benefit to making the JPG into a TIF.
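
To see why, here is a small sketch (assuming Pillow and NumPy are installed) that re-saves a JPEG as a TIFF in memory: the TIFF holds exactly the same 8-bit pixels as the JPEG it came from, it just occupies more space.

```python
from io import BytesIO

import numpy as np
from PIL import Image

# A smooth gradient stands in for a lab scan.
ramp = np.linspace(0, 255, 32, dtype=np.uint8)
img = Image.fromarray(np.dstack([np.tile(ramp, (32, 1))] * 3))

# Simulate the JPEG the lab hands you.
jpg_buf = BytesIO()
img.save(jpg_buf, format="JPEG", quality=90)
from_jpg = Image.open(BytesIO(jpg_buf.getvalue())).convert("RGB")

# "Convert" that JPEG to an uncompressed TIFF.
tif_buf = BytesIO()
from_jpg.save(tif_buf, format="TIFF")
from_tif = Image.open(BytesIO(tif_buf.getvalue())).convert("RGB")

pixels_identical = np.array_equal(np.asarray(from_jpg), np.asarray(from_tif))
tiff_is_bigger = len(tif_buf.getvalue()) > len(jpg_buf.getvalue())
print(pixels_identical, tiff_is_bigger)  # the TIFF adds bytes, not data
```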


by Oliver Jennrich
January 20, 2012 - 6:24am

Since no one seems to have commented on this yet: the relevant difference between JPEG and TIFF is not so much the fact that JPEG is compressed, but that this compression is lossy. The TIFF standard supports lossless compression, and many TIFF files are compressed. To make matters more complicated, JPEG allows lossless compression as well, although it is used relatively rarely.
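
This distinction is easy to demonstrate with Pillow (assumed installed): an LZW-compressed TIFF round-trips bit-for-bit, while even a high-quality JPEG of the same image does not. A one-pixel checkerboard is deliberately chosen as the test image, since its repetitive pattern compresses well losslessly but its hard edges are exactly what JPEG's lossy transform struggles with.

```python
from io import BytesIO

import numpy as np
from PIL import Image

# A one-pixel checkerboard: hard edges, very repetitive.
tile = np.array([[0, 255], [255, 0]], dtype=np.uint8)
gray = np.tile(tile, (16, 16))  # 32x32 pixels
img = Image.fromarray(gray)     # grayscale ("L") image

raw_buf, lzw_buf, jpg_buf = BytesIO(), BytesIO(), BytesIO()
img.save(raw_buf, format="TIFF")                          # uncompressed TIFF
img.save(lzw_buf, format="TIFF", compression="tiff_lzw")  # lossless LZW TIFF
img.save(jpg_buf, format="JPEG", quality=95)              # lossy JPEG

tiff_back = np.asarray(Image.open(BytesIO(lzw_buf.getvalue())))
jpeg_back = np.asarray(Image.open(BytesIO(jpg_buf.getvalue())).convert("L"))

tiff_lossless = np.array_equal(tiff_back, gray)   # compressed, yet identical
jpeg_lossless = np.array_equal(jpeg_back, gray)   # JPEG alters the pixels
lzw_smaller = len(lzw_buf.getvalue()) < len(raw_buf.getvalue())
print(tiff_lossless, jpeg_lossless, lzw_smaller)
```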

Using a lossy format for anything that will later be substantially edited is probably not a good idea, so depending on the further use of the pictures, I’d opt for a lossless format.
