I would like to know: what is at stake here?
L* vs what, some fixed gamma encoding, for RGB images?
Chris Murphy started by arguing that L* was wrong for reasons he deemed
important. Then Florian Höch responded. Jan-Peter chimed in. Now Lars and
lastly, Scott.
Are we arguing about how many ICC profiles fit on the tip of a hairpin,
or does the choice of "transfer function" encoding (I'm gamma- or
L*-agnostic) really have a profound impact on image reproduction?
I ran a series of press tests lately using the roman16 images. All placed
natively in InDesignCSx, in eciRGBv4, converted on Export to PDF to my
favorite output profile, made into plates and run at normal density on
coated paper. I didn't see any artifacts in the printed output, and
neither did my colleagues. Does it really matter how the images start
their encoded life when all subsequent transforms are meant to map their
lightness in some optimal fashion through the PCS in the presence of
preferred rendering (all profilers use some form of subjective rendering,
even in colorimetric)?
Roger Breton
Hi Lars,
After the firestorm of controversy my ISO standards document has
created in the US, this artwork by Cai Guo-Qiang sums up how I feel at
the moment. The tiger on the left is me; notice the arrows in my back.
The tiger on the right is the USA, firing arrows into the ECI/ISO
tiger. The question is: why all the violence?
But seriously, thanks for contacting me. I am glad to speak with
someone from Adobe on this topic. Several people have pointed out
quite bluntly that some of my reasoning in the document is flawed;
however, your discussion regarding HDTV does not seem to directly
relate to the end-user experience we were testing. This was
specifically high-end digital still cameras, off-the-shelf Macs and
PC's running Adobe Photoshop, Calibrated displays (1.8/2.2/L*), ink
jet printers, proofers, and printing presses using ICC color
management and the various working spaces.
After taking the exact same calibrated digital captures and targets
through various capture applications to 16-bit TIFF files in various
working spaces, the results were improved using the L*-based eciRGBv2
working space. To be clear, as the document states, the differences
were minimal, but nonetheless they are apparent to all who have
reviewed the work. The testing also indicated that the press dot gain
and tonal compression were areas to investigate further. The document
was not put out to discredit any person or company, but it was an
attempt to take a look back at how we came to where we are today, and
where we are going in the future when it comes to digital workflow and
standards.
I did not create the eciRGBv2 working space nor did I vote on the ISO
standardization, but every test I have performed indicates that it
(and especially the L*) takes care of problems that have confused
users for years. What gamma do I calibrate to? What working space
should I use? How do I expose a digital camera? These are very simple
questions to answer if we agree that the only way humans
can interact with digital information is through our analog senses
(until someone invents a direct neural implant-welcome to the
matrix!). The ISO adopting these standards will go a long way to help
the community agree on the tonal relationships of a digital image and
will go a long way to stabilize the foundation of what we consider a
digital image.
Try a simple experiment. Send 100 random photographers a simple Kodak
Q-14 scale and ask them to photograph it and send you the files. What
do you expect to get back? I expect that you will end up with a
bell-shaped curve, meaning that the process is pretty much chance-governed
(I learned this at RIT as a student years ago). But this is a shame
because with digital cameras and great tools like Camera RAW users
should be able to achieve much better results with any DSLR or studio
digital camera. Now do the same test, but tell the user to select
eciRGBv2 working space (even though they currently cannot in ACR) and
to expose the chart to a middle gray value of 128 and neutralize. The
results would begin to look much better. Lastly, ask them to select
eciRGBv2 working space, then to create a custom ICC profile. The
consistency would be better yet with just a few simple steps. Of
course there is no built-in profiling in ACR, but just imagine the
possibilities. If Adobe is not interested in this level of
granularity, just let X-Rite or others provide plug-ins for this
capability. The bottom line is that users can have a fighting chance
to create fantastic repeatable images with only a few simple steps.
This would be far easier than the current slider-based camera calibration
functions in ACR. Of course any working space could be used, but the
various gamma gradations of each working space make RGB readouts
confusing: 118 for Adobe RGB, 100 for ProPhoto RGB, etc. This is why
eciRGBv2 makes so much sense to me: there is a 1:1 relationship from
capture to working space to display at the core. In 8- or 16-bit, the
less the tones are altered the better. No one has been able to clearly
articulate a good argument for AdobeRGB other than that it is widely
used and that it bears the name Adobe.
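Those differing middle-gray readouts follow directly from each space's tone curve. A minimal sketch of the arithmetic, assuming idealized pure power-law curves (the real profiles differ slightly near black) and taking "middle gray" as CIE L* = 50:

```python
# Why the same middle gray reads differently per working space.
# Assumes pure power-law TRCs; real ICC profiles add a linear toe.

def lstar_to_Y(L):
    """Invert CIE L* to relative luminance Y (valid for L* > 8)."""
    return ((L + 16) / 116) ** 3

Y = lstar_to_Y(50)  # ~0.1842, relative luminance of L* = 50

adobe_rgb = round(255 * Y ** (256 / 563))  # Adobe RGB (1998), gamma 563/256
prophoto  = round(255 * Y ** (1 / 1.8))    # ProPhoto RGB, gamma 1.8
eci_v2    = round(255 * 50 / 100)          # eciRGBv2, L* encoding

print(adobe_rgb, prophoto, eci_v2)  # 118 100 128
```

With the L* encoding, the 8-bit readout is just L* scaled to 0–255, which is the 1:1 relationship argued for above.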
What can Adobe do?
If Adobe simply added support for eciRGBv2 in Lightroom and
distributed its own internal Lightroom space (supposedly a linear
version of ProPhotoRGB), users could make their own choices as to what
standards they wish to use and Adobe could be considered a participant
in the worldwide movement towards open standards. It would also be
helpful for Adobe to decide to use similar RGB readouts in all
applications and to support custom camera profiling. These few
additions are all quite easy to integrate for Adobe, but will go far
to help the user experience.
Your email is very thought-provoking and I appreciate the insight,
but what are you suggesting as the ideal environment? Adobe RGB? sRGB?
xvYCC? Or is it Bruce RGB?
As far as the $30,000 HDTV display goes, I suggest that one would
simply calibrate it to L* so the image will match the same image
displayed in Adobe Photoshop on your Mac or PC calibrated to L*. It's
just so simple.
You should read some of the early works of Albert Munsell (see the
attached 1901 patent). I feel that photographic exposure is closely
linked to the LAB model and both are closely linked to human
perception. If I build a replica of this device, I can almost
guarantee that if I put a 50L patch in the unit, the needle will line
up directly with the middle of the LAB scale. In terms of output for
museums, we would love to see exact copies or at least have the
opportunity to experience what that looks like! If a user sees exact
copies of artworks and is not happy the user can use the extensive set
of Adobe tools to take it wherever they choose. I do not know if the
goal of standards or software should be to improve on reality as a
baseline; I think users are more than capable of making those creative
decisions from a well-defined, consistent starting point.
If you really, honestly care about this topic and want to help, ask
John Nack if you could get on a plane to New York and spend a day
with me at the Met Museum. I guarantee that if we met in person with a
few folks from Adobe, you will see right away what we are striving
for. My only goal has been to open a dialog. Boy have I opened a dialog!
Thanks,
Scott
On Mar 11, 2008, at 10:29 PM, Lars Borg wrote:
> Scott,
>
> L* is great if you're making copies. However, in most other
> scenarios, L* out is vastly different from L* in. (This is sometimes
> known as rendering.) And when L* out is different from L* in, an L*
> encoding is very inappropriate as illustrated below.
>
> Although it's sometimes meaningless to use L* for colors on set, let
> me provide an example for video. Let's say you have a Macbeth chart
> in the studio and it's lit perfectly for this case, so L* measures
> 96 for the whitest patch. On set, the six gray patches would measure
> around L* 96, 81, 66, 51, 36, 21. (Note that I didn't tell you the
> color temperature of the studio lights.)
>
> Assuming the camera is Rec.709 compliant, using a 16-235 digital
> encoding, and the camera is set for the exposure of the Macbeth
> chart, the video RGB values would be 224,183,145,109,76,46.
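Those code values can be reproduced with a short sketch, assuming the standard Rec.709 OETF (V = 1.099·Y^0.45 − 0.099 for Y ≥ 0.018) and 8-bit "legal range" quantization (code = 16 + 219·V). The display-side L* figures further down additionally depend on the monitor EOTF and flare model, so only the encoding step is shown:

```python
# Scene L* values of the six Macbeth gray patches -> Rec.709 video codes.
# Assumes the standard Rec.709 OETF and 16-235 (legal range) quantization.

def lstar_to_Y(L):
    """Invert CIE L* to relative luminance Y (valid for L* > 8)."""
    return ((L + 16) / 116) ** 3

def rec709_oetf(Y):
    """Rec.709 opto-electronic transfer function, with linear toe."""
    return 1.099 * Y ** 0.45 - 0.099 if Y >= 0.018 else 4.5 * Y

codes = [round(16 + 219 * rec709_oetf(lstar_to_Y(L)))
         for L in (96, 81, 66, 51, 36, 21)]
print(codes)  # [224, 183, 145, 109, 76, 46]
```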
>
> Displaying this video on a calibrated $30,000 reference HD TV
> monitor should reproduce these patches at L* 95.5, 78.7, 62.2, 45.8,
> 29.6, 13.6.
> If say 2% flare is present on the monitor (for example at home), the
> projected values would be different, here: 96.3, 79.9, 63.8, 48.4,
> 34.1, 22.5.
>
> As you can see, L* out is clearly not the same as L* in.
> This is sometimes called the system gamma, and this is a required
> feature of the video reproduction pipeline, and follows
> international standards from ITU and IEC.
> Except for copiers, a system gamma greater than 1 is a required
> feature (for user acceptance) for all image reproduction systems
> aiming to please human eyes. For example, film still photography has
> a much higher system gamma than video.
>
> Now, if you prescribe an L* encoding for the video, which set of
> values would you use:
> 96, 81, 66, 51, 36, 21 or
> 95.5, 78.7, 62.2, 45.8, 29.6, 13.6 or maybe
> 96.3, 79.9, 63.8, 48.4, 34.1, 22.5?
> Any of these is wrong when used in the wrong context.
> If I need to restore the scene colorimetry for visual effects work,
> I need 96, 81, 66, 51, 36, 21.
> If I need to re-encode the HD TV monitor image for another device,
> say a DVD, I need 95.5, 78.7, 62.2, 45.8, 29.6, 13.6.
>
> In this context, using an L* encoding would be utterly confusing due
> to the lack of common values for the same patches. Video solves this
> by not encoding in L*. (Admittedly, video encoding is still somewhat
> confusing. Ask Charles Poynton.)
> Similar examples can be made for print.
>
> When cameras, video encoders, DVDs, computer displays, TV monitors,
> DLPs, printers, etc., are not used for making exact copies, but
> rather for the more common purpose of pleasing rendering, the L*
> encoding is inappropriate. And as noted above, this is reflected in
> current international standards.
> Further, as most combinations of displays and printers (regardless
> of TRC) make for very poor copying systems (they mismatch in gamut,
> dynamic range, viewing conditions, etc.), the L* encoding remains
> equally inappropriate here.
>
> Maybe you are constructing a copying system?
>
> Lars Borg
>
> --
> Lars Borg
> Principal Scientist
>
> Adobe Systems Inc.
> 345 Park Avenue
> San Jose, CA 95110-2704
> Phone 408 536.2723
> Fax: 408 537.4068
> borg(a)adobe.com
>
> http://www.adobe.com