Dear Digital Medievalist,
For a couple of new Internet editions of medieval manuscripts I've procured high-resolution tiff images, which I'm publishing with a "magnifying glass" overlay. This is in a "responsive" context, so the components of the page, including the images, resize to fit the window dimensions. There's a sample at:
http://suprasliensis.obdurodon.org/pages/supr001r.html
The current images were prepared quickly for a demo, and are not of consistent size or resolution. I would now like to go back to get this part of the site up to production quality, and I would be grateful for advice about how to manage the images, an area where I don't have much (= any) knowledge of best practice. I'd like small image files that load quickly, and I think I don't mind slightly lossy compression if that would reduce the file size substantially--but if that's a mistake, I'd be grateful for a warning. I think there are two questions:
1. What's an appropriate file format and resolution (size, dpi, color depth, etc.) for the base image, the one that is displayed in full to the right of the transcription, and that resizes as the user resizes the window? Currently the image files are in jpg format and vary in size from about 2M down to 250k. I can regenerate them all at a common size, resolution, color depth, etc. from the original tiffs, but I don't know whether there is any sense of best practice concerning what that size should be. If I go the lossy route, what's a reasonable value?
2. What's an appropriate degree of magnification for the magnifying-glass inset view? Currently the magnifying glass inset always shows the image at 200% of the actual file size (not the size of the page as displayed without magnification in the browser window!). You can see the difference by comparing, say, folio 30v (about 2MB) to 284r (about 270k), where the magnification is much greater in the former than the latter. I can set the level of magnification anywhere I'd like, but is there any agreement about best practice here?
By way of orientation in the question: the purpose of the full image is to allow the user to see the image conveniently alongside the transcription, verifying any moments where our editorial judgment might appear surprising or questionable. The point of the magnified inset is to let the user examine details that may not be visible at lesser magnification, such as erasures, corrections, etc. My casual impression is that 30v looks pretty good and loads reasonably quickly (although quicker would be better), but I don't place great confidence in my own casual impressions.
Thanks,
David djbpitt@pitt.edu
On Mon, 18 Jun 2012 "Birnbaum, David J" djbpitt@pitt.edu wrote:
Dear Digital Medievalist,
For a couple of new Internet editions of medieval manuscripts I've procured high-resolution tiff images, which I'm publishing with a "magnifying glass" overlay. This is in a "responsive" context, so the components of the page, including the images, resize to fit the window dimensions. There's a sample at:
Have you considered embedding images in the DjVu format?
I guess it would not be possible to automatically resize the image, but there would be some other advantages.
The current images were prepared quickly for a demo, and are not of consistent size or resolution. I would now like to go back to get this part of the site up to production quality, and I would be grateful for advice about how to manage the images, an area where I don't have much (= any) knowledge of best practice. I'd like small image files that load quickly,
Even big DjVu images should load quickly; that's what the format was designed for.
and I think I don't mind slightly lossy compression if that would reduce the file size substantially--but if that's a mistake, I'd be grateful for a warning. I think there are two questions:
DjVu allows both lossy and lossless compression.
- What's an appropriate file format and resolution (size, dpi, color depth, etc.) for the base image, the one that is displayed in full to the right of the transcription, and that resizes as the user resizes the window? Currently the image files are in jpg format and vary in size from about 2M down to 250k. I can regenerate them all at a common size, resolution, color depth, etc. from the original tiffs, but I don't know whether there is any sense of best practice concerning what that size should be. If I go the lossy route, what's a reasonable value?
- What's an appropriate degree of magnification for the magnifying-glass inset view? Currently the magnifying glass inset always shows the image at 200% of the actual file size (not the size of the page as displayed without magnification in the browser window!). You can see the difference by comparing, say, folio 30v (about 2MB) to 284r (about 270k), where the magnification is much greater in the former than the latter. I can set the level of magnification anywhere I'd like, but is there any agreement about best practice here?
Using DjVu allows the user to select the size and magnification of the magnifying glass.
By way of orientation in the question: the purpose of the full image is to allow the user to see the image conveniently alongside the transcription,
DjVu allows you to store the transcription as "hidden text". You can view it with the original scan as background, and the fragment under the mouse pointer can also be shown in the status line.
verifying any moments where our editorial judgment might appear surprising or questionable. The point of the magnified inset is to let the user examine details that may not be visible at lesser magnification, such as erasures, corrections, etc. My casual impression is that 30v looks pretty good and loads reasonably quickly (although quicker would be better), but I don't place great confidence in my own casual impressions.
The DjVu viewer allows the user to choose the magnification of the image.
Of course, using DjVu does have some drawbacks, like the need to install the DjVu viewer and browser plugin.
Best regards
Janusz
On Tue, Jun 19, 2012 at 5:35 AM, Janusz S. Bień jsbien@mimuw.edu.pl wrote:
On Mon, 18 Jun 2012 "Birnbaum, David J" djbpitt@pitt.edu wrote:
Dear Digital Medievalist,
For a couple of new Internet editions of medieval manuscripts I've procured high-resolution tiff images, which I'm publishing with a "magnifying glass" overlay. This is in a "responsive" context, so the components of the page, including the images, resize to fit the window dimensions. There's a sample at:
Have you considered embedding images in the DjVu format?
I guess it would not be possible to automatically resize the image, but there would be some other advantages.
I'd worry about suggesting the DjVu format because it requires a special plugin to view these files. Helpful features such as image magnification should, where possible, be built on top of a display that works perfectly well without plugins, or indeed JavaScript, as a form of progressive enhancement.
But that said, another option would be to implement an actual pan/zoom interface rather than just a magnification viewer. I've used both the Google Maps API and OpenLayers to do that in the past; however, both require JavaScript.
The current images were prepared quickly for a demo, and are not of consistent size or resolution. I would now like to go back to get this part of the site up to production quality, and I would be grateful for advice about how to manage the images, an area where I don't have much (= any) knowledge of best practice. I'd like small image files that load quickly,
Even big DjVu images should load quickly; that's what the format was designed for.
That would also be a benefit of a full pan/zoom interface, in that the initial file that is loaded can be fairly low resolution.
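To make that concrete, here is a minimal sketch of the server-side half of such an interface: pre-cutting a pyramid of tiles from each master TIFF so that a JavaScript client (OpenLayers, the Google Maps API, etc.) only ever fetches the tiles visible at the current zoom level. It's written against Python's Pillow library; the 256-pixel tile size and the file-naming scheme are illustrative assumptions, not anyone's production setup.

    # Sketch: cut a tile pyramid from a master TIFF (assumes Pillow).
    # Level 0 is full resolution; each later level halves the image.
    import os
    from PIL import Image

    Image.MAX_IMAGE_PIXELS = None   # master scans exceed Pillow's default cap
    TILE = 256                      # a common tile size for map-style viewers

    def cut_pyramid(master_tiff, out_dir):
        os.makedirs(out_dir, exist_ok=True)
        img = Image.open(master_tiff).convert("RGB")
        level = 0
        while True:
            for y in range(0, img.height, TILE):
                for x in range(0, img.width, TILE):
                    box = (x, y, min(x + TILE, img.width),
                           min(y + TILE, img.height))
                    name = "L%d_%d_%d.jpg" % (level, x, y)
                    img.crop(box).save(os.path.join(out_dir, name),
                                       "JPEG", quality=80)
            if img.width <= TILE and img.height <= TILE:
                break  # the whole page now fits in one tile
            img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
            level += 1

The initial page load then only needs the smallest level, which is why the first paint can be so light.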
and I think I don't mind slightly lossy compression if that would reduce the file size substantially--but if that's a mistake, I'd be grateful for a warning. I think there are two questions:
DjVu allows both lossy and lossless compression.
I don't think lossy compression like JPEG is problematic in this kind of case (after all, you could provide a link to download the uncompressed full-resolution version of the file if licensing allows).
- What's an appropriate file format and resolution (size, dpi, color depth, etc.) for the base image, the one that is displayed in full to the right of the transcription, and that resizes as the user resizes the window? Currently the image files are in jpg format and vary in size from about 2M down to 250k. I can regenerate them all at a common size, resolution, color depth, etc. from the original tiffs, but I don't know whether there is any sense of best practice concerning what that size should be. If I go the lossy route, what's a reasonable value?
I would go for consistency, and so regenerate everything at your current lowest size and resolution and see how bad that looks. (Again, not a problem with a full pan/zoom interface.)
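If it helps, regenerating consistent derivatives from the TIFF masters is only a few lines with a tool like Python's Pillow. This is just a sketch, and the 2000-pixel long side, directory names, and quality value are placeholders to experiment with, not recommendations:

    # Sketch: rebuild all web derivatives at one consistent size/quality.
    # LONG_SIDE and QUALITY are starting points to tune by eye, not rules.
    import glob, os
    from PIL import Image

    LONG_SIDE = 2000   # pixels on the longer edge
    QUALITY = 75       # JPEG quality; raise it if artefacts show in the lens

    os.makedirs("web", exist_ok=True)
    for tiff in sorted(glob.glob(os.path.join("masters", "*.tif"))):
        img = Image.open(tiff).convert("RGB")
        img.thumbnail((LONG_SIDE, LONG_SIDE))   # keeps aspect ratio, never upscales
        base = os.path.splitext(os.path.basename(tiff))[0]
        img.save(os.path.join("web", base + ".jpg"),
                 "JPEG", quality=QUALITY, optimize=True)

One pass like this also makes the magnifier behave comparably on every folio, since every file ends up at the same pixel size (assuming the pages themselves are of similar physical dimensions).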
- What's an appropriate degree of magnification for the magnifying-glass inset view? Currently the magnifying glass inset always shows the image at 200% of the actual file size (not the size of the page as displayed without magnification in the browser window!). You can see the difference by comparing, say, folio 30v (about 2MB) to 284r (about 270k), where the magnification is much greater in the former than the latter. I can set the level of magnification anywhere I'd like, but is there any agreement about best practice here?
Using DjVu allows the user to select the size and magnification of the magnifying glass.
Whatever solution you use, it would be best, in my opinion, if the magnification were consistent across the edition: not a consistent percentage of the actual file size, but a consistent amount of text visible in the magnification window. Comparing the two, I found 284r shows a bit too much and 30v a bit too little, for what it's worth. Though if you're not giving users a way to zoom the image, I'd tend towards the larger 30v view, because the point of the magnification is to let a reader see where you might have gone wrong.
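One way to get that consistency is to compute each folio's lens zoom from the image's pixel density instead of hard-coding 200%. A sketch of the arithmetic in Python, assuming you know (or can store) each image's pixel width and the physical width of the page it depicts; all the numbers here are made-up examples:

    # Sketch: pick a per-page zoom so the lens always spans the same
    # physical width of parchment, whatever each file's resolution is.
    LENS_PX = 300       # on-screen width of the magnifier window
    TARGET_MM = 40.0    # how much of the original page the lens should span

    def lens_zoom(image_px_width, page_mm_width, displayed_px_width):
        px_per_mm = image_px_width / page_mm_width   # source pixel density
        needed_px = px_per_mm * TARGET_MM            # source pixels covering 40 mm
        scale = LENS_PX / needed_px                  # source px -> screen px
        # express it relative to the base image as currently displayed:
        return scale * image_px_width / displayed_px_width

    # e.g. a 1970 px wide scan of a 155 mm wide page, displayed 600 px wide:
    print(round(lens_zoom(1970, 155.0, 600), 2))     # -> 1.94 with these numbers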
By way of orientation in the question: the purpose of the full image is to allow the user to see the image conveniently alongside the transcription, verifying any moments where our editorial judgment might appear surprising or questionable. The point of the magnified inset is to let the user examine details that may not be visible at lesser magnification, such as erasures, corrections, etc. My casual impression is that 30v looks pretty good and loads reasonably quickly (although quicker would be better), but I don't place great confidence in my own casual impressions.
I'd generally agree with that. You want to be able to see individual letters/strokes if possible, but don't want it to be so slow that the magnification window is problematic to use.
Of course, using DjVu does have some drawbacks, like the need to install the DjVu viewer and browser plugin.
That is a very big drawback. (It's also why I wouldn't suggest Zoomify as a pan/zoom solution, since that would require Flash. JavaScript is at least fairly standard these days.)
Just my thoughts.
-James
Dear David,
In the Online Froissart we use small JPEGs at screen resolution (calculated assuming the highest resolutions on common commercially available systems; see for example http://en.wikipedia.org/wiki/Display_resolution).
For more sophisticated image viewing, which allows zooming, we use the Virtual Vellum manuscript viewer, which was developed for our project (see http://www.hrionline.ac.uk/onlinefroissart/apparatus.jsp?type=vv). It is written in Java, so it requires Java to be installed on the machine used to access the images, but most computers these days have this. The obvious advantage is that it will run on any system that can run Java. Our default full-image resolution is 600 dpi (relative to the original MS, which means 150 MB in TIFF and about 6 MB in slightly lossy JPEG 2000, although some image collections provided by partners are at 300 dpi).
For our project we deploy the viewer in a separate window, but it can also be embedded in browser pages (see http://cbers.shef.ac.uk/manuscripts/index.html). Apart from the possibilities of zooming and of displaying many images side by side (in stand-alone windows only), Virtual Vellum can measure areas of the page (size of initials, columns, etc.). It is also possible to display the transcription (and even a translation) as a separate layer within the Virtual Vellum viewer (check this out on Besançon MS 865). The transcription (and translation) will respond to changes in the image, and will therefore zoom in and change view when you change the main image view.
The native image format used by Virtual Vellum is JPEG2000, but the software comes with a utility to create JPEG2000 libraries of images from several input formats (including TIFF and JPEG).
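For comparison, a one-off conversion looks much the same in other tool chains. Here, for instance, is a sketch using Python's Pillow, assuming a Pillow build with OpenJPEG (JPEG 2000) support; the file names are hypothetical, and the 25:1 rate is just an assumption chosen to land near the 150 MB-to-6 MB figure above:

    # Sketch: lossy JPEG 2000 from a TIFF master (needs OpenJPEG support
    # in Pillow; file names and the 25:1 rate are illustrative only).
    from PIL import Image

    img = Image.open("folio_001r.tif")
    img.save("folio_001r.jp2",
             quality_mode="rates",
             quality_layers=[25],    # ~25:1 compression
             irreversible=True)      # 9/7 wavelet, i.e. slightly lossy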
Virtual Vellum is freely available for non-commercial use by single users or projects. It can be used on local images collections or embedded as a Java-applet for web display (for links to documentation and downloads, see http://www.shef.ac.uk/hri/projects/projectpages/virtualvellum).
Godfried Croenen
--
Dr. Godfried Croenen
Department of Cultures, Languages and Area Studies
University of Liverpool
Chatham Street
Liverpool L69 7ZR
United Kingdom
Tel: +44 (0)151 794 2763
Fax: +44 (0)151 794 2357
e-mail: G.Croenen@Liverpool.ac.uk
Personal webpage: http://www.liv.ac.uk/~gcroenen/index.htm
Online Froissart: http://www.hrionline.ac.uk/onlinefroissart/
Hi David
This is a really good question. I came into academia with an established background as a digital imaging scientist, and as far as I can see at the moment, there are no real standards for image digitisation or reproduction in the academic world that can be used effectively by scholars. For example, TEI says a lot about how you reference your images but (unsurprisingly) doesn't attempt to give any guidance on how you get them into an appropriate resolution or colour depth in the first place. JISC also give some guidance here, but many of their project specifications seem to favour 4800 dpi in 24-bit colour. Not very helpful if (for example) you are scanning First World War black and white photographs, which have a relatively low lines-per-inch resolution (search for lpi-to-dpi calculators online), or if you are using a scanner whose maximum optical resolution is (say) 2400 dpi, with everything beyond that interpolated. These things will of course differ from project to project, which is why it is so difficult to lay down accurate guidelines.
I ran a workshop at Kalamazoo which dealt with some of these issues ('Digital Imaging for Medievalists'). Interestingly enough, I scanned and printed some conventional (pre-digital) black and white and colour photographs at various resolutions and colour depths, and not one of the 30 people who came to the session correctly identified which was which within each set. In my view, what we really need are some standards and accurate guidelines, and I am researching this area at the moment. For example, not many people realise that some pre-digital transparencies are between 2,000 and 4,000 lines resolution (lpi), which puts many digital cameras in the shade.
Moving to your question, I think the important thing to understand is that screen resolutions are still extremely low compared with (say) printer resolutions. A typical laptop screen at 1024x768 is between 75 dpi and 100 dpi depending on the physical screen size, and many modern mobile/smart phone screens tend to be around 150 dpi. These are generalisations, of course, and there are many web-based calculators, such as http://tiporama.com/tools/pixels_inches.html, which can give you a better idea of resolution capability on a device-by-device basis. Compare that with an EPSON printer at 1440 dpi horizontal resolution and you will appreciate the difference. So the image you display at any one time on the screen can be relatively low resolution (for performance reasons), but the underlying source image (the original scan) needs to be high resolution and lossless to allow the best level of zoom as you move closer and closer into the page.
As a palaeographer, I want to be able to see 'the big picture' (i.e. the whole page) at any one time, but I also want to be able to zoom into sections of the manuscript, and then down to the individual characters (without pixellation), to determine whether that character really is an "S" or an "A" (palaeographers will get the joke here). The problem with most lossy compression is that the 'big picture' is great but, as you zoom in, you start to see significant degradation of the image (a by-product of the way the compression is done), which gets in the way of scholarly interpretation. For example, take a high-resolution TIFF image, compress it at various JPEG quality levels, and then print each version full-page on a high-resolution printer. You will find that all the samples are virtually indistinguishable from each other until you bring out a magnifying glass, and then all will become depressingly clear in terms of the drop in image quality/fidelity at the lowest file sizes.
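To put numbers on that, the calculation those pixels-to-inches tools perform is essentially one line. A quick sketch (the example screens are assumptions, not anyone's actual hardware):

    # Sketch: pixels per inch from pixel dimensions and physical diagonal.
    import math

    def screen_ppi(width_px, height_px, diagonal_inches):
        return math.hypot(width_px, height_px) / diagonal_inches

    print(round(screen_ppi(1024, 768, 13.3)))  # ~96 ppi: a typical laptop
    print(round(screen_ppi(480, 320, 3.5)))    # ~165 ppi: a small phone screen

Even the sharper of those two is roughly an order of magnitude below the 1440 dpi printer.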
As other contributors have indicated, there are various solutions (e.g. http://djvu.org/resources/whatisdjvu.php and Virtual Vellum) which can also help, but the resolution, compression, and colour depth of the underlying image is still key. As James implies, it is also important to select a software solution that has longevity and is not going to disappear or change once your research project is completed. There are a number of Flash-based viewers around, but (as James also implies) Adobe's track record at maintaining backward compatibility (and even retaining technology) has not been that great in the past. Having said that, Adobe Flash has installations that run into at least the tens of millions, so it is unlikely to disappear overnight; but we all know Apple's stance on Adobe Flash!
So, in closing, I think the most important thing to ensure, for any manuscript project at least, is that the underlying image is at the highest resolution possible (and lossless), and that the capability to zoom into 'snapshots' of that image is flexible in terms of the zoom factors allowed. The image 'snapshot' displayed at any one time on the screen can be relatively low resolution for optimal performance, but it still needs to be an accurate subset of the original image, and as a result downscaling from a lossily compressed original is unlikely to work very well. As I recall, this downscaling is something that Virtual Vellum does well, as does Photoshop of course, but identifying one of the many tools and plug-ins that operate in a similar and appropriate way for your particular needs and custom website will of course need further research on your part.
Hope this helps
Best regards
Tony Harris
Kellogg College, Oxford
I am excited to see discussion on this topic. We host an online repository of images, transcriptions, and collation tools for scholars doing text criticism on the New Testament. Often, holding institutions of the ~5600 manuscript copies will not give us permission to house digital images on our own site, but insist on us feeding the images to our users from their website. This makes providing a seamless experience for our users a challenging task.
Our system is composed of a suite of OpenSocial gadgets, and one of those gadgets is an image viewer which provides drag-to-pan, wheel (or two fingers up/down on a Mac) to zoom, brightness/contrast adjustment, box annotations, and persistent URLs back to an adjusted view. It's all JavaScript, requires no server-side or extra client-side components beyond the web browser, and can view remote URLs for pretty much any image with a link on the internet. It's all freely available for your use -- an easy drop-in if you're working inside an OpenSocial container, or we have a standalone version too, which is a simple popup webpage.
You can try the gadget out here:
http://ntvmr.uni-muenster.de/manuscript-workspace?docid=10001 (click on a thumbnail and it will load the full image)
and the standalone version can be seen by clicking the permalink icon ("Embeddable External Viewer Link To This Image") to generate a return link to your view of the image. (The "Discuss" button doesn't really do anything if you're not logged in, so avoid that one unless you create an account)
We also have a Firefox plugin which will construct the appropriate link for viewing an image in the image viewer. You simply install the plugin, browse to any internet page with an image you are interested in, and then, on the plugin, press "Annotate". It's crude and assumes the largest image on the page is the one you're interested in, but it mostly does what our users expect. It can read images from some sites which use custom viewers by piecing tiles together, but mostly the images must be simple JPG, PNG, or GIF images.
Hope this is helpful. Please feel free to contact me if you have questions,
Troy
Virtual Manuscript Room
Institut für Neutestamentliche Textforschung
Hi all,
I'll add my perspective from working in digital libraries and on the St Gall Monastery Virtual Library (http://www.stgallplan.org/). Pan and zoom is my preferred viewing interface. We use Zoomify in our application, but only because it's our best choice right now. The biggest drawback is the Flash viewer: it rules out iPad and iPhone users from even knowing the images exist. So we're interested in moving to JPEG 2000, which we will do soon through the Islandora package. I'm curious to hear others' findings about JPEG 2000.
Best,
Lisa McAulay
Librarian for Digital Collection Development
UCLA Digital Library Program
http://digital.library.ucla.edu/
Hi, All (and, hello to my colleague, Lisa) -
As primarily a _user_ of manuscripts and MS images online, I first want to chime in with those arguing against plugins. Although Zoomify, Virtual Vellum, and DjVu all get the job done, they're not particularly pleasant to work with. (A quick glance at the OpenSocial gadgets that Troy linked to above suggests they're pretty slick, and Mac/iPad usability is particularly welcome, though zoom/pan functionality didn't actually work on my iPad.) Plugins are _always_ a hassle, and after a year or two has passed, often a nightmare (e.g. the Windows-only SID plugin used by the Auchinleck project).
The fundamental issue that David raises remains unanswered, largely because there is a series of overlapping use-cases that don't neatly resolve to decisions about file format, file size, or appropriate magnification, let alone map onto the different resolutions and dpis of screens and printers. Both f. 30v and 284r are nominally readable, although the horizontal distance between the full-text transcription and the MS image makes reading rather a pain in the neck. (This is, I assume, why T-PEN and Quentin Miller's Diplomat software offer transcription space just below MS lines, rather than adjacent; but since, as David indicates, this is for transcription _checking_, the gap is an inconvenience rather than a deal-breaker.) The higher-quality image of f. 30v is vastly easier to work with, particularly with the magnifier tool, where the low resolution of f. 284r becomes noticeable. Were I transcribing f. 30v, I think the overview and zoom views are pretty much spot on; it's difficult for me to imagine scenarios when transcribing or checking a transcription that would require different levels of zoom, other than switching from a desktop to a laptop/iPad. (That said, I did open the underlying image in a separate window, at what it reports as 1970x2666, since the magnifying tool precludes that.) If I were teaching with this image, however, I would want a third or fourth level of zoom: complete page, maybe 1/4 of a page, and then the max zoom of the current magnifier. If you're not anticipating people teaching with these images, these aren't intermediate image sizes you need to provide, nor do you need to resort to more capable plugin zoom tools.
Again, as an end-user, lossless files are unwieldy; I have plenty of 50 MB TIFF files scanned from microfilm on my computer that I avoid whenever possible, even with a decently recent machine. If I'm transcribing them, I'll work from perfectly readable 72 dpi versions of 3325x2409 TIFF images of bifolia, and only return to the originals in rare cases. I don't mind the minor compromises of a lossy codec for the speed and convenience of a 2-3 MB file, as long as you 1) warn me that's what you're providing, 2) make those sizes and zoom options consistent across a single manuscript, and ideally across all of a project's manuscripts, and 3) offer options for accessing larger images (or explain that you can't for licensing, revenue, or other reasons; cf. Irish Script on Screen or Stanford's Parker on the Web).
So, in short, from my perspective, f. 30v is about right - you're fudging things by condensing the image into the frame on the right, which depends on the user's browser window size, and may be less readable on a laptop than the desktop I'm currently working on. The javascript magnifier is nifty, though it's a bit annoying that it can't be turned off, rather than only tucked away in a corner. And none of this brings you any closer to best practices for the back end.
Kind regards, Matthew
Matthew Fisher
Assistant Professor
Department of English
University of California, Los Angeles
On Jun 19, 2012, at 11:22 AM, McAulay, Elizabeth wrote:
Hi all,
I'll add my perspective from working in digital libraries and from working on the St Gall Monastery Virtual Library (http://www.stgallplan.org/). Pan and zoom is my preferrered viewing interface. We use zoomify in our application, but just because it's our best choice right now. The biggest drawback is the flashviewer. Rules out ipad and iphone users from even knowing the images exist. So we're interested in going with jpeg 2000 and we will through the islandora package soon. I'm curious to hear others findings about Jpeg2000.
Best,
Lisa McAulay Librarian for Digital Collection Development UCLA Digital Library Program http://digital.library.ucla.edu/
On Jun 19, 2012, at 5:11 AM, "Troy A. Griffitts" scribe@crosswire.org wrote:
I am excited to see discussion on this topic. We host an online repository of images, transcriptions, and collation tools for scholars doing text criticism on the New Testament. Often, holding institutions of the ~5600 manuscript copies will not give us permission to house digital images on our own site, but insist on us feeding the images to our users from their website. This makes providing a seamless experience for our users a challenging task.
Our system is composed of a suite of OpenSocial gadgets and one of those gadgets is an image viewer which provides drag to pan, wheel (or 2 fingers up/down on a mac) to zoom, brightness/contrast adjust, box annotations, and persistent URLs back to an adjusted view. It's all javascript and requires no server side or extra client side components beyond the web browser and can view remote URLs for pretty much any image with a link on the internet. It's all freely available for your use-- easy dropin if you're working inside an opensocial container, or we have a standalone version too which is a simple popup webpage.
You can try the gadget out here:
http://ntvmr.uni-muenster.de/manuscript-workspace?docid=10001 (click on a thumbnail and it will load the full image)
and the standalone version can be seen by clicking the permalink icon ("Embeddable External Viewer Link To This Image") to generate a return link to your view of the image. (The "Discuss" button doesn't really do anything if you're not logged in, so avoid that one unless you create an account)
We also have a firefox plugin which will construct the appropriate link for viewing an image in the image viewer. You simply install the plugin, browse to any internet page with an image in which you are interested in, and then on the plugin, press "Annotate". It's crude and assumes the largest image on the page is the one you'd are interested in, but mostly does what our users expect. It can read images from some sites which use custom viewers by piecing tiles together, but mostly the images must be simple jpg, png, gif images.
Hope this is helpful. Please feel free to contact me if you have questions,
Troy
Virtual Manuscript Room Institut für Neutestamentliche Textforschung
On 06/19/2012 11:53 AM, Tony Harris wrote: Hi David
This is a really good question. I came into academia with an established background as a digital imaging scientist, and as far as I can see at the moment there are no real standards for image digitisation or reproduction in the academic world that can be used effectively by scholars. For example, TEI says a lot about how you reference your images but (unsurprisingly) doesn't attempt to give any guidance on how you get them into an appropriate resolution or colour depth in the first place. JISC also give some guidance here, but many of their project specifications seem to favour 4800dpi in 24-bit colour. Not very helpful if (for example) you are scanning First World War black-and-white photographs, which have a relatively low lines-per-inch resolution (search for lpi-to-dpi calculators online), or if you are using a scanner which has a maximum optical resolution of (say) 2400dpi and everything else is interpolated resolution. These things will of course differ from project to project, which is why it is so difficult to lay down accurate guidelines.
I ran a workshop at Kalamazoo which dealt with some of these issues ('Digital Imaging for Medievalists'). Interestingly enough, I scanned and printed some conventional (pre-digital) black-and-white and colour photographs at various resolutions and colour depths, and not one person out of the 30 who came to the session correctly identified which was which within each set. In my view, what we really need are some standards and accurate guidelines, and I am researching this area at the moment. For example, not many people realise that some pre-digital transparencies are between 2,000 and 4,000 lines resolution (lpi), which puts many digital cameras in the shade.
Moving to your question, I think the important thing to understand is that screen resolutions are still extremely low compared with (say) printer resolutions. For example, a typical laptop screen at 1024x768 is between 75dpi and 100dpi depending on the physical screen size, and many modern mobile/smart-phone screens are around 150dpi. Of course these are all generalisations, and there are many web-based calculators, such as http://tiporama.com/tools/pixels_inches.html, which can give you a better idea of resolution capability on a device-by-device basis. Compare that with an EPSON printer at 1440dpi horizontal resolution and you will appreciate the difference. So it is important to understand that the image you display at any one time on the screen can be relatively low resolution (for performance reasons), but the underlying source image (the original scan) needs to be high resolution and lossless to allow the best level of zoom as you move closer and closer into the page.
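The arithmetic behind those calculators is simple enough to sketch in a few lines of Python (the 15-inch laptop here is just a hypothetical example, not a measurement):

    import math

    def screen_ppi(width_px, height_px, diagonal_inches):
        # Pixels per inch of a display, from its pixel dimensions
        # and its physical diagonal size.
        diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
        return diagonal_px / diagonal_inches

    # A 1024x768 panel on a 15-inch laptop works out to roughly 85 ppi,
    # consistent with the 75-100dpi range mentioned above.
    print(round(screen_ppi(1024, 768, 15)))  # ~85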
As a palaeographer, I want to be able to see 'the big picture' (i.e. the whole page) at any one time, but then I also want to be able to zoom into sections of the manuscript, and then down to the individual characters (without pixellation), to determine if that character really is an "S" or an "A" (palaeographers will get the joke here). The problem with most lossy compressions is that the 'big picture' is great but, as you zoom in, you start to see significant degradation of the image (a by-product of the way the compression is done), which gets in the way of scholarly interpretation. For example, take a high-resolution TIFF image, compress it at various JPG quality levels, and then print it full page on a high-resolution printer. You will find that all the samples are virtually indistinguishable from each other until you bring out a magnifying glass, and then all will become depressingly clear in terms of the drop in image quality and fidelity at the lowest file sizes.
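That experiment is easy to set up. Here is a minimal sketch using Python and Pillow (the file names are placeholders) which writes the same TIFF master out at several JPEG quality levels, so the size/fidelity trade-off can be inspected directly:

    import os
    from PIL import Image  # Pillow

    # Hypothetical input: a lossless TIFF master of one manuscript page.
    master = Image.open("page_030v.tif")

    for quality in (95, 85, 75, 50, 25):
        out = f"page_030v_q{quality}.jpg"
        # Lower quality values mean smaller files and more visible
        # artefacts; values above ~95 give diminishing returns.
        master.convert("RGB").save(out, "JPEG", quality=quality)
        print(out, os.path.getsize(out) // 1024, "KB")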
As other contributors have indicated, there are various solutions (e.g. http://djvu.org/resources/whatisdjvu.php and Virtual Vellum) which can also help, but the resolution, compression, and colour depth of the underlying image are still key. As James implies, it is also important to select a software solution that has longevity and is not going to disappear or change once your research project is completed. There are a number of Flash-based viewers around, but (as James also implies) Adobe's track record at maintaining backward compatibility (and even retaining technology) has not been that great. Having said that, Adobe Flash has installations running into at least the tens of millions, so it is unlikely to disappear overnight--but we all know Apple's stance on Adobe Flash!
So in closing, I think the most important thing to ensure, for any manuscript project at least, is that the underlying image is at the highest resolution possible (and lossless), and that the capability to zoom into 'snapshots' of that image is flexible in terms of the zoom factors allowed. The image 'snapshot' displayed at any one time on the screen can be relatively low resolution for optimal performance, but it still needs to be an accurate subset of the original image; as a result, downscaling from a lossy compressed original is unlikely to work very well. As I recall, this downscaling is something that Virtual Vellum does well, as does Photoshop of course, but identifying one of the many tools and plug-ins which operate in a similar and appropriate way for your particular needs and custom website will of course require further research on your part.
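In concrete terms, that workflow (lossless master, accurately downscaled delivery copies) might look like the following sketch, again using Python and Pillow with placeholder names; the 1200-pixel target width and quality setting are assumptions, not standards:

    from PIL import Image  # Pillow

    def make_derivative(master_path, out_path, target_width=1200, quality=85):
        # Generate a web-delivery JPEG from a lossless TIFF master.
        # Downscaling always starts from the master, never from an
        # already-compressed JPEG, to avoid stacking lossy artefacts.
        img = Image.open(master_path)
        scale = target_width / img.width
        img = img.resize((target_width, round(img.height * scale)),
                         Image.LANCZOS)
        img.convert("RGB").save(out_path, "JPEG", quality=quality)

    make_derivative("supr030v_master.tif", "supr030v_web.jpg")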
Hope this helps
Best regards
Tony Harris
Kellogg College, Oxford
Digital Medievalist -- http://www.digitalmedievalist.org/ Journal: http://www.digitalmedievalist.org/journal/ Journal Editors: editors _AT_ digitalmedievalist.org News: http://www.digitalmedievalist.org/news/ Wiki: http://www.digitalmedievalist.org/wiki/ Twitter: http://twitter.com/digitalmedieval Facebook: http://www.facebook.com/group.php?gid=49320313760 Discussion list: dm-l@uleth.ca Change list options: http://listserv.uleth.ca/mailman/listinfo/dm-l
I'd like to chime in and say this has been an excellent thread. A built-in magnifier is a terrific addition to any website presenting MSS, older printed matter, or individual illustrations where detail is necessary or merely nice. The magnifier does, however, depend on the availability of high-resolution images, so perhaps a two-step interface is necessary, as with the set of 1,230 TIFF page images I want to make available with a transcription of Hall's Chronicle (1550): thumbnails clickable through to full-page images, each with a magnifier. Sounds ideal. Now I just have to figure out if I can do it before the end of time.
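As a starting point, the thumbnail generation can at least be scripted in bulk. A minimal sketch using Python and Pillow, assuming (hypothetically) that the TIFF masters sit together in one directory; the 200-pixel bounding box is an arbitrary choice:

    from pathlib import Path
    from PIL import Image  # Pillow

    SRC = Path("hall_chronicle_tiffs")  # hypothetical directory of page masters
    DST = Path("thumbs")
    DST.mkdir(exist_ok=True)

    for tif in sorted(SRC.glob("*.tif")):
        img = Image.open(tif)
        img.thumbnail((200, 200))  # shrink in place, preserving aspect ratio
        img.convert("RGB").save(DST / (tif.stem + ".jpg"), "JPEG", quality=80)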
Cheers, Al Magary SF
Hi All,
This has been a really great thread. Here are some more thoughts: having your cake and eating it too is also possible, with some ingenuity and elbow grease.
While JPEG2000 seems like a great idea on paper, it is often not used as the primary storage format because of concerns about the lack of support in browsers (or on the desktop, for that matter). A number of years ago I was working on an image portal solution at TU Delft in the Netherlands and decided to use JPEG2000 as the storage format, but to convert the files to JPEG on demand, linked to an image viewer. This is similar to what is done with large TIFF originals in many other projects, but the big benefit was the speed at which one could extract segments at arbitrary resolutions from a JPEG2000 file.
For the viewer I chose the MS Seadragon Ajax version (it has lovely smooth scrolling and zooming, and cross-browser support), so no Flash was needed. Instead of pre-tiling the images (as Zoomify and the MS Seadragon back-end tools do), I wrote a Python script that serves up the needed zoomed tiles on demand via HTTP, in effect 'pretending' to be a Seadragon back-end. It worked really well, and I was quite satisfied with the solution. For the JPEG2000 processing we purchased a license from http://www.kakadusoftware.com/ and used the command-line tools, called from Python.
Here is an example of this solution in action: http://repository.tudelft.nl/view/ir/uuid%3Aa5c3103d-e20d-4a3c-9eb6-2dc7b287...
(This is the institutional repository showing some sample pages from PDF research papers, tiled for preview before downloading, since some files can be hundreds of megabytes--but imagine it being a manuscript illustration, if you will.)
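For anyone curious what such an on-demand tile back-end involves, here is a heavily simplified sketch of the idea--not the TU Delft code itself. It answers Deep Zoom-style tile requests (/level/col_row.jpg) by cropping and scaling a source image with Pillow; a production version would instead invoke Kakadu's command-line decoder (kdu_expand) to extract just the needed region and resolution level from the JPEG2000 master. The file name, port, and tile size below are all assumptions:

    import io, math, re
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from PIL import Image  # Pillow

    SOURCE = Image.open("page_030v.tif")  # hypothetical master image
    TILE = 256                            # tile edge in pixels (no overlap, for simplicity)
    MAX_LEVEL = math.ceil(math.log2(max(SOURCE.size)))  # deepest Deep Zoom level

    class TileHandler(BaseHTTPRequestHandler):
        # Requests look like /12/3_7.jpg -> level 12, column 3, row 7
        PAT = re.compile(r"^/(\d+)/(\d+)_(\d+)\.jpg$")

        def do_GET(self):
            m = self.PAT.match(self.path)
            if not m:
                self.send_error(404)
                return
            level, col, row = map(int, m.groups())
            if level > MAX_LEVEL:
                self.send_error(404)
                return
            scale = 2 ** (MAX_LEVEL - level)  # full-res pixels per level pixel
            left, top = col * TILE * scale, row * TILE * scale
            if left >= SOURCE.width or top >= SOURCE.height:
                self.send_error(404)
                return
            right = min(left + TILE * scale, SOURCE.width)
            bottom = min(top + TILE * scale, SOURCE.height)
            # Crop the region at full resolution, then scale it down to the
            # requested level; a JPEG2000 decoder does both steps internally.
            tile = SOURCE.crop((left, top, right, bottom)).resize(
                (math.ceil((right - left) / scale),
                 math.ceil((bottom - top) / scale)))
            buf = io.BytesIO()
            tile.convert("RGB").save(buf, "JPEG", quality=85)
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.send_header("Content-Length", str(len(buf.getvalue())))
            self.end_headers()
            self.wfile.write(buf.getvalue())

    HTTPServer(("localhost", 8000), TileHandler).serve_forever()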
Hope this is of use as an example to others.
Etienne Posthumus
Amsterdam, The Netherlands