The Redford Conference in Archaeology 2012 Proceedings

October 25 to 27, 2012

Disclaimer: This webpage of the proceedings of the 2012 Redford Conference in Archaeology was recorded and compiled by the website administrator, Chris Mundigler, and is intended to be a summary of the topics presented, demonstrated and discussed during the Conference. Any errors or omissions in the content or intent of the presentations listed below rest solely with the administrator and his note-taking at the Conference, and are no reflection on any of the participants at the Conference.

At present, the content below has been reviewed and approved, for accuracy and intent with respect to their respective presentations, by the following participants: Avshalom Karasik

A summary article of these proceedings, also written by Chris Mundigler, can be found in the January 2013 issue of the Center for the Study of Architecture's CSA Newsletter.

Below, you'll find a detailed overview of the presentations at the Conference (with many external links in blue provided by the author throughout the text). Please note that these topics, details and concepts will continue to be added to, revised and updated as new material is researched and collected by the author, so please check back again in the future to see how this webpage evolves. Thank you.

The Redford Conference in Archaeology 2012

Thursday, October 25, 2012

At 7:00 pm, an open, public lecture was presented by Dr. Norbert Zimmermann of the Austrian Academy of Sciences on Showing the Invisible: 3-D Scanning in the Roman Catacombs. A short, 2-minute sample of this amazing visualization by Dr. Zimmermann and his team from the Institute for the Study of Ancient Culture in Vienna can be found here.

Friday, October 26, 2012

At 7:00 pm, Eric Orlin and Martin Jackson (Associate Dean), both of the University of Puget Sound, gave the welcoming remarks to the attendees, followed by the opening presentation of the Conference, Taking Archaeology Digital by Chris Mundigler (InCA Research Services, Canada). This highly visual presentation was meant to be a Coles Notes preview address to introduce the audience to the many and varied aspects of new technologies in archaeology, including hardware possibilities (from computers and tablets to survey and photographic equipment), the amazing variety of publicly available and specialized software (both commercial and especially free-of-charge), Cloud applications in GIS, CAD and imaging for archaeological field, studio and archival purposes, and much more.

Saturday, October 27, 2012

8:30 - 11:30 am: Panel on Paperless Recording and Data Management

Chair: James Bernhard, University of Puget Sound

The following presentations were part of this segment of the Redford Conference in Archaeology:

Next Steps in Paperless Recording: An Update from the Sangro Valley [Italy] Project by Christopher F. Motz from the University of Cincinnati (Department of Classics), in which existing iPad data entry systems were reworked to make the field data recording workflow more efficient and accurate. It was emphasized that the most important part of this process was an efficient Graphical User Interface, or GUI, to minimize errors in initial recording. One of the most significant hardware options used by this project was the Eye-Fi WiFi SD card.

Taking Survey Digital: Implementing a Paperless Workflow on the Eastern Vani [Georgia] Survey by Ryan Hughes of the University of Michigan. One of the main issues discussed here was the integration of data collected into a proper workflow with other systems, such as databases, GIS, and so on.

There were a number of objectives to this project, including decreasing person hours worked, increasing accuracy, and allowing for data comparison.

For this project, the digital database form was kept in the same format as the printed form in case there was a need to print the data, as well as to bring in previous legacy data.

One of the biggest issues in the field was note-taking, for which OneNote was used for digital note-taking in the field.

The newer technologies of the iPad and the Trimble GeoXM (GPS) used by the project were viewed with suspicion by the locals in the areas where the project worked, and could actually hinder data collection compared to a paper notebook.

The methodology used for this project was: field survey; sampling; intensive collection; extensive recording; and digital integration of data.

Some of the expected and unexpected challenges encountered by this project were: equipment failure; equipment damage; local infrastructure (e.g., electricity); inconsistency in previous data; local opposition; and training (and re-training) in why it's all important, not just how.

Some of the equipment used by the project was: the Trimble GeoXM 2008 Series GPS for submeter accuracy; and the Eye-Fi WiFi SD card to upload photos, although this was found not to be all that useful in the field.

Some of the various software and hardware options discussed included: ArcGIS 9.3 and 10, used with good results (although ESRI's systems were found to be not well suited for fieldwork on the iPad); Google Earth, found to be the best mapping option for fieldwork; iGIS HD and GISRoam, also found to be not well suited for fieldwork; and the Trimble GeoXM GPS unit with ArcPad 10 installed, found to be good, but with the disadvantage of a small screen. The project found that Windows Mobile syncing didn't work well, so they used Dropbox as their data storage option.

In terms of digital data collection, some of the pros were: the advantage of direct database entry; data standardization; data security (i.e., instant backup); greater accuracy; and tools that enhance data collection.

Some of the cons of digital data collection included: increased recording time; inflexibility in data entry; reliance on adequate efficiency; training time; and lack of robust data integration at the point of collection.

In terms of paper data collection, some of the pros were: greater flexibility in data entry; reduced recording time; a focus on data collection, and not on data recording; and increased student involvement.

Some of the cons of paper data collection included: record inconsistency; increased data loss; longer work days; reduced time for standardization checks; data storage limitations; and effort redundancy.

iArchaeology: Explorations in In-Field Digital Data Collection by Kathryn E. DeTore of the Proyecto de Investigacion Arqueologico Regional Ancash [PIARA at Hualcayan, Peru]. One of the main objectives for this project was contextual relational database integration on the iPad, using FileMaker Go in conjunction with iDraw to give them an "iArchaeology" system.

Some of the problems encountered by the project in using this system were: rolling blackouts; some of the local population; poor cell and internet service in the field; and the need to convert printed forms to digital forms for the relational database.

Some of the advantages realized of a relational database included: instantaneous access to information; importing photos and diagrams (using iDraw); standardization of categories; flexibility through an "other" field; and inventoried and cross-referenced data.

Some of the limitations of the relational database used by the project on their iPads included: rural settings and infrastructure problems; entering and accessing photos in the database; integration with GIS; and real-time updates and backups (a server and an Eye-Fi WiFi independent network were needed on-site).

Going Big. Data management strategies for the large scale excavations at Gabii [Italy, about 30 km east of Rome] by Rachel Opitz from the Center for Advanced Spatial Technologies. Once research questions were framed for the project, the team was able to design a large-scale survey and a short timeframe excavation. One solution for the limited timeframe was photogrammetry instead of stone-by-stone (SBS) drawing in the field. The team ended up using about 30 photos per subject. The software PhotoScan was used to produce 3D models for orthophotos from 2D off-plane photos georeferenced for GIS with Coke bottle caps as reference points. This process provided substantial time savings compared to doing the drawings after the season in the lab. The 3D models were also visually intuitive, with interactive content. The goal was to develop new tools in 3D recording for innovative systems in the field that are useable right now.
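The georeferencing step described above can be illustrated with a toy example. The sketch below (all coordinates invented, not from the Gabii project) derives a 2D similarity transform from two ground control points, the role played by the bottle-cap markers, and applies it to map an image-space point into site coordinates:

```python
def similarity_from_two_points(src, dst):
    """Solve x' = a*x - b*y + tx, y' = b*x + a*y + ty
    exactly from two control-point pairs (image -> site)."""
    (x1, y1), (x2, y2) = src
    (X1, Y1), (X2, Y2) = dst
    dx, dy = x2 - x1, y2 - y1
    dX, dY = X2 - X1, Y2 - Y1
    d = dx * dx + dy * dy          # squared distance between source points
    a = (dX * dx + dY * dy) / d    # combined scale/rotation terms
    b = (dY * dx - dX * dy) / d
    tx = X1 - (a * x1 - b * y1)    # translation fixed by the first pair
    ty = Y1 - (b * x1 + a * y1)
    return a, b, tx, ty

def apply_transform(params, pt):
    a, b, tx, ty = params
    x, y = pt
    return (a * x - b * y + tx, b * x + a * y + ty)

# Two photo-space control points and their surveyed site coordinates:
params = similarity_from_two_points([(0, 0), (1, 0)], [(100, 200), (101, 200)])
print(apply_transform(params, (0.5, 0.5)))  # → (100.5, 200.5)
```

In practice photogrammetry packages solve a full 3D transform from many control points by least squares, but the principle is the same: known markers anchor the model to real-world coordinates.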

Adobe PDFs were used for Acrobat single-context viewing, and the Unity engine (software developed for video gaming) was used to walk through 3D models. The Unity software was also linked into the project database for a 3D content-delivery system online.

Some of the advantages of this system were: web-based, with only one plugin required; free; scriptable to link to other sources; and the handling of GIS and modeling content.

Other options used by the project were declarative 3D in-browser viewers, such as X3DOM and WebGL, which require no plugins and afford good control. However, these systems are not yet a stable standard.

Some of the research considerations were: deciding when to use photogrammetry and when to draw; blurring the line between GIS and modeling; and ways to archive and deliver 3D data.

It was found that GIS was used mostly for spatial analysis and querying, while the video-gaming software was used to visualize that data.

The project also used Gephi and Cytoscape (social networking analysis tools, where Harris Matrices became nodes in the network analysis software); Meshlab for 3D content (although Unity was their main 3D delivery software); and OpenCTM, e57 and las/laz (for point-cloud data content).

In terms of archival issues, the big question was: will the system still be in use in 20 years?

The major goals of the project were to make the 3D content integral to the archived databasing, analysis and presentation of data; to be able to integrate the GIS data into a model environment that links out to data content; to make everything web-based; to make it as easy as possible to use (they found that a well-designed User Interface (UI) is everything); and to make the digital tools an essential part of publication. They also wanted to get the whole team to use the digital tools by making it so cool that even non-digital people would want to use it.

Their motto: exploring and rethinking

This presentation was followed by a commentary by Sebastian Heath of the Institute for the Study of the Ancient World: When Worlds Link: the Context of Digital Archaeology in which he emphasized the need to be able to bring archived data from 80 years ago or more into the digital archaeology used today. He said it was important not to lose the original context of an artifact (its artifact number, for instance) in bringing that research into the digital world. It was noted that it's critical to maintain the continuity of stable, long-term identifiers across sources and internet sites. Live-linking to field data was also an important issue to consider.

One of the positive aspects of digital archaeology was to expose data to the world at large so you and others can use it on the internet. To do this, he alluded to policies for the distribution of archaeological data via Creative Commons License, for instance. The internet is the tool by which we will analyze all past, current and future data. People will use your data if you post it, and there will come a time when the importance of your data will decrease if you don't share it online.

Issues such as peer-review, the use of your data by others and data manipulation once you make your research open to the web were also discussed (and could be addressed by a simple please note that this is a public draft of work in progress clause). Also considered was the issue of whether or not this web-based data can be useful in tenure and position considerations versus printed publication.

A number of useful references were also cited during this presentation.

After lunch (and further dissection of the topics along with the sandwiches and dessert), Archeolink of the Netherlands presented their software suite, which aims to meet the needs of most archaeological information systems by: facilitating easy data entry with fewer errors; enabling better data management and analysis; easing data exchange; and creating a certain level of standardization.

Academic and political resistance against an archaeological information system in the Netherlands stems mostly from a fear of change; a fear of loss of academic freedom within universities versus CRMs; a fear of inflexibility; and a fear of letting others watch your raw data.

The database basics accommodated in the Archeolink system include: the use of metadata that includes descriptive data; the use of a well-designed relational database model (RDM); and the use of reference lists embedded in the database.

Some of the benefits of using an archaeological information system, such as Archeolink, are: more efficient data entry with fewer errors; all data stored and structured in the same way; easy and fast data access; each person being able to find everything themselves; and significant time savings in the field and during the phases of analysis.

The Archeolink software: presents a flexible core database (mandatory or optional fields, auto-entry fields, et cetera); creates user access profiles; allows a single-project database versus a multi-project database; allows a network environment versus a stand-alone environment; allows the use of reference lists (pick lists); and has multiple hardware options (e.g., barcodes).
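The reference-list (pick-list) idea can be sketched with a plain SQLite schema (the table and field names here are invented, not Archeolink's actual schema): a foreign key restricts a field to the values in an embedded reference table, so invalid entries are rejected at the point of recording rather than discovered during analysis.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforce reference-list integrity

# Embedded reference list (pick list) of allowed material types:
con.execute("CREATE TABLE material (code TEXT PRIMARY KEY)")
con.executemany("INSERT INTO material VALUES (?)",
                [("ceramic",), ("lithic",), ("bone",)])

con.execute("""CREATE TABLE find (
    id       INTEGER PRIMARY KEY,
    locus    TEXT NOT NULL,                        -- mandatory field
    material TEXT NOT NULL REFERENCES material(code),
    recorded DEFAULT CURRENT_TIMESTAMP             -- auto-entry field
)""")

# A valid entry succeeds:
con.execute("INSERT INTO find (locus, material) VALUES (?, ?)",
            ("L101", "ceramic"))

# An entry outside the pick list is rejected by the database itself:
try:
    con.execute("INSERT INTO find (locus, material) VALUES (?, ?)",
                ("L101", "wood"))
except sqlite3.IntegrityError:
    print("rejected: 'wood' is not in the reference list")
```

This is the same principle the panel described: data entry becomes faster and less error-prone because the structure itself disallows inconsistent values.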

1:00 - 2:30 pm: Panel on Digital Publication

Chair: Jane Carlin, Director of Collins Library, University of Puget Sound

To start this segment of the Redford Conference in Archaeology, Nick Eiteljorg of the Center for the Study of Architecture presented a commentary on the State of Digital Publication, hoping for at least a 100 year life span for archaeological data in whatever digital archive or digital format it's in. The data must retain its usefulness over time in many evolving computer formats as it's translated.

Some of the challenges presented were the need for data that will last centuries, not decades, and the need to preserve everything (CAD, metadata, databases, et cetera) in a form that is open for analysis and manipulation, with the original archived data preserved intact. Archives such as OpenContext should not be used for only part of the data - we need to develop full archives of all our data.

This openness presents challenges in limiting the use of that data, and these issues need to be carefully addressed. The data should be accessible without special or limited software, and the question that must be asked is whether there are reasons for supplying data that also requires supplying software, or whether the data should be more open for others to use in their own way.

And while we're at it, we have to ask ourselves:

  • what is the audience that we're preparing the data for? the general public? or academics?
  • what's the time horizon for the data? a decade? a century?
  • what's the content of the data? text only? or data and text?
  • and how do we provide the data for people to use?

Archaeologists and archivists have to co-operate, and talk about how to get where we want to be.

As for electronic publication and tenure issues, we also have to ask ourselves if we can do it electronically within our institutions. If so, we have to talk to the archivists about how to supply that data in the proper form.

Some resources from the Archaeology Data Service in the UK that might help with some of these issues and questions are the ADS Guides to Good Practice.

From Khipu Knots to Instant Tweets: Transition to the New Media Platforms in Archaeology by Anastasiya Travina of Texas State University-San Marcos. This presentation began by explaining the centuries-old khipu, or "talking knots", and their communication value as basically a 3D binary encoding process developed by the Inca of South America. These khipu contained accounting and calendrical values, with some mythology built into them as well, and most were destroyed by 1583 AD by order of the invading Spaniards.

Fast-forwarding to the 21st century and our more modern forms of communication, the shortcomings of traditional publications were discussed, such as accessibility limited by subscriptions to publishing houses, and so on.

Some of the advantages of Open Access publication include the fact that it accelerates innovation and scientific discovery; promotes education of the general public; eliminates elitism; and it's easy to present and promote research.

Some of the disadvantages of Open Access publication include issues of rankings, prestige and reputation; complicated pricing strategies; and a peer-review system that doesn't come from an established publishing house.

How do we foster innovation while maintaining the credibility and reputation of the author?

The presentation referenced the Elsevier case of 2012, in which 12,861 scientists objected to the publisher's high prices for individual journals. It was mentioned that the Elsevier market strategy forced libraries to bundle subscriptions to journals.

Also referenced in the talk were thecostofknowledge.com; arXiv, in reference to pre-peer-reviewed e-publishing (ePrints), where scientists can submit their work for pre-peer review before regular peer review; and the Grigori Perelman case and his work on the Poincaré conjecture.

Types of Open Access projects mentioned were Gold, for immediate, delayed and hybrid publication (where the author pays a processing charge of about $3,000 per article), and Green, for publication anywhere on the internet.

Successful Open Access projects include PLOS ONE, but as yet there is no dedicated archaeological e-publishing service available.

Also discussed were social media (Twitter, Facebook and blogging platforms) and Open Access educational platforms, such as academia.edu with more than 2 million users and a daily new member rate of some 4,000 people!

Additional Open Access references included Coursera and edX, free online course platforms that use social media networks and forums to capture modern audiences and share peer-reviewed articles with virtual students.

Some of the conclusions reached in this presentation were that the increasing popularity of Open Access jeopardizes the selectivity of journals and the established status quo of publishers, and creates anxiety among publishing houses.

Why not share the knowledge with society for free?

What Traditional Archaeological Publishers are Doing Wrong and How to Fix It by Andrew Reinhard of the American School of Classical Studies at Athens discussed print and Open Access houses and publishers, and noted that there is no standard of publication within archaeology.

Some of the problems discussed revolved around the communication of research, and how Open Access and open peer review solve some of this problem; the flatness of traditional publication (archaeology is not suited to a 2D publication format - it needs a 3D environment of scans, models, datasets and links to other sources); and the impracticality of, and our emotional attachment to, print media (including the storage, searchability and portability issues of traditional publication - the latter two solved by e-publications that can be linked to dynamic imaging).

It was felt that publishers must be encouraged to produce born-digital publications, and that tenure issues could be alleviated if institutions embraced the e-publication format.

In terms of time, print scholarship can take years to come to fruition with writing, editing, peer review, revisions and final publication, while e-publishing shortens the whole process and scholarship can be published as it happens.

As printing and distribution costs can be prohibitive for traditional print publication, and can sometimes actually amount to half of the overall budget, all-digital publication can save costs without compromising quality and without taking up shelf space.

One of the major problems with e-publication at this point, though, is the fact that, so far, there is no standardization in archaeology and no best-practices have been established - there is a real need to form a consortium with publishers to set standards.

In terms of preservation, to date most previous archaeological work has been in print, with no unified storage capability or format for that storage; standard formats must be established and useable in both online and offline versions.

We need to stop thinking of archaeological publications as books (in other words as images beside text with notes) and move to more unconventional formats.

2:45 - 3:45 pm: Panel on Uses of GIS

Chair: Barry Goldstein, University of Puget Sound

The following presentations were part of this segment of the Redford Conference in Archaeology:

Increased Analytical and Visualization Capabilities in Landscape Archaeology through the Use of GIS Field Applications by J.M.L. Newhard of the College of Charleston, who talked about the Avkat Project in Turkey, which included an intensive survey of the environs, focused on the period between the 4th and 13th centuries AD.

Basic questions during this presentation asked: what do we have? and when do we know we have it? and then, what does it all mean?

In trying to answer these questions, the project used PDAs running ArcPad 7 to accurately map the survey areas for both on-site and off-site scatters to define significant densities and accurately represent spatial locations.

Some of the main considerations in this process were the time between data collection and analysis, and regional interpretation.

Some of the needs that had to be addressed during the project included: how to support many people doing data entry simultaneously, with multi-user access (i.e., PDAs feeding one GIS server and one database server, with client laptops for data entry and analysis).

Various analysis techniques for field data were also discussed, such as interpolation versus the kernel density function (the latter was found to be the best method for accurate representation).
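The kernel density idea can be illustrated with a minimal one-dimensional sketch (pure Python, with invented find positions along a hypothetical survey transect, not Avkat data): each find contributes a Gaussian bump, and clusters of finds emerge as smooth density peaks rather than gridding artifacts.

```python
import math

def gaussian_kde(points, xs, bandwidth):
    """Evaluate a 1-D Gaussian kernel density estimate at each x in xs."""
    norm = len(points) * bandwidth * math.sqrt(2 * math.pi)
    return [sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points) / norm
            for x in xs]

# Invented sherd positions (metres along a transect):
finds = [2.0, 2.2, 2.5, 8.0, 8.1]
xs = [i / 10 for i in range(101)]          # evaluate from 0 m to 10 m
density = gaussian_kde(finds, xs, bandwidth=0.5)

peak = xs[density.index(max(density))]
print(peak)  # the densest scatter lies near the three-sherd cluster at ~2 m
```

The bandwidth plays the same role the presenters weighed when choosing analysis parameters: too small and every find is its own "site", too large and distinct scatters merge.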

Some of the conclusions were that the process must allow you to develop a series of functional parameters for site identification; model those parameters in GIS; assign values to features; and then compare assigned values.

GIS, Google Earth, and Cost-Surface Modeling for Ancient Mediterranean Trade Routes by Ulrike Krotscheck of Evergreen State College noted that conventional maps poorly represent the constraints of connectivity for projects such as modeling ancient Mediterranean trade routes and trade centers.

Early Babylonian and Greek views of the 3D world were rendered as 2D spatial representations, and how people represented time-to-travel in ancient maps was a real consideration.

Our modern technologies, on the other hand, allow us to create open-source, accessible GIS (such as ORBIS), into which variables can be entered that account for oceanographic features, time-to-travel and much more, to plot trade routes by sea or land through cost-surface modeling. They also allow us to place this ORBIS data into Google Earth for topographical analysis of routes (although this doesn't always accurately depict or reflect the topography) and to use satellite imagery to analyze the ancient routes.
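At its core, cost-surface modeling of this kind is a least-cost-path search over a raster of travel costs. The sketch below (cost values invented; this is not ORBIS's implementation) runs Dijkstra's algorithm over a tiny grid where "sea" cells are cheap and "land" cells expensive, so the cheapest route detours around the landmass:

```python
import heapq

def least_cost(cost, start, goal):
    """Dijkstra over a grid; total cost includes every cell entered."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

# Sea cells cost 1, land cells cost 9 (invented friction values):
cost = [[1, 1, 1],
        [9, 9, 1],
        [1, 1, 1]]
print(least_cost(cost, (0, 0), (2, 0)))  # → 7: the coastal detour beats
                                         #   crossing the land (cost 11)
```

Real systems add wind, current and seasonal variables to the per-cell costs, but the route-selection machinery is essentially this.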

The conclusion was that the process was ineffective in simulating Greek time-distance relationships, and may actually misrepresent and over-simplify the actual routes taken; it was, nonetheless, a useful tool for cost-surface analysis.

4:00 - 5:30 pm: Panel on Imaging Technology

Chair: Brett Rogers, University of Puget Sound

To start this segment of the Redford Conference in Archaeology, Norbert Zimmermann of the Austrian Academy of Sciences presented a commentary on the need for better communication between archaeologists and end-users of their data.

3D laser scanners have developed to a point where it's not a matter of if 3D scanning should be used, but what scanner and software should be incorporated. Point-cloud or mesh systems versus optical systems were discussed and it was emphasized that we have to fit the technology to the need.

How long will the digital information remain viable into the future?

With our modern technology, consideration must now be given not so much to the quality of the data captured as to the workflow used to capture that data (for example, taking freely positioned photos and triangulating them to the point-cloud data to produce photo-realistic modeling).

For instance, with the Roman Catacombs projects, 3D modeling with 3D Analyst software was used in conjunction with photos taken from a panorama-mount camera from many vantage points where X, Y, Z points-in-common could be triangulated to combine 3D scans with photos, which is a semi-automatic process with digital software.

Meshlab was used for modeling, and photos were pre-processed in Photoshop before texturing into the 3D composite model. Point-cloud visualization was done with Scanopy, which provided visual as well as metric analysis, along with pre-defined, automated or manual camera paths that can be set for walk- and fly-throughs impossible in the real world, and positioning markers that can be set to link to databases and other external resources.

With this process, architectural features can even be moved, if necessary, to view the 3D model better - something that is not possible in the real world.

By being able to link databases within point-clouds, models can be rendered as 3D databases and libraries for reports and much more within the 3D point-cloud and photorealistic model environments.

This whole process is very cost- and time-effective for 3D documentation and rendering, with the added benefit of being able to view the 3D model from real-world-impossible vantage points. The monument becomes accessible in new ways not available on-site, and can be studied away from the actual monument, from anywhere in the world.

For some visualizations of these processes, check out the Jacobs University Bremen nearly-automated data collection system video and the Austrian Academy of Sciences video.

Computerized Documentation and Analysis of Archaeological Artifacts by Avshalom Karasik of the Hebrew University in Jerusalem, Israel rounded out the Conference presentations by discussing the advantages of the use of optical scanners for the documentation of archaeological small finds and artifacts.

Besides the accuracy and the digital format of the data, one of the main benefits of the process of scanning artifacts is speed: generally 15 artifacts can be drawn per day using traditional manual techniques; many more than that per hour can be processed by optical scanning. However, these 3D scans can only be exterior views of the artifacts, and it's expected that archaeologists bring the objects in for scanning before restoration so that interior details and views can also be scanned.

Analysis of the 3D scans allows for research beyond the visual level, and attempts to correlate and tie together vast assemblages of pottery in a time- and cost-effective way.

Some problems are often encountered when sherds are broken in all kinds of forms and sizes, which makes accurate positioning and angles for the 3D scanning paramount.

Innovative algorithms extract consecutive cross-sections from the 3D scans and optimally overlap them to position the object (potsherd) accurately, eliminating the traditional need to find the sherd's tangent on a flat surface.

Bifacial lithics can also be scanned in a similar way, as can complex tablets and other small finds.

Because the scanned data is digital and numeric, distance matrices and pottery clusters can be determined automatically based on the average profiles of the ceramic objects.
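The distance-matrix step can be sketched in a few lines (the profile vectors below are invented, not from Karasik's system, which works on full scanned profiles): each sherd's average profile becomes a vector, pairwise distances form the matrix, and similar vessels fall close together.

```python
import math

# Hypothetical average profiles, sampled as radii (cm) at fixed heights:
profiles = {
    "sherd_A": [4.0, 4.2, 4.5, 5.0],
    "sherd_B": [4.1, 4.2, 4.4, 5.1],   # nearly the same vessel form as A
    "sherd_C": [9.0, 8.5, 8.1, 7.9],   # a much wider, different form
}

def dist(p, q):
    """Euclidean distance between two profile vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

names = sorted(profiles)
matrix = {(a, b): dist(profiles[a], profiles[b]) for a in names for b in names}

# The closest pair suggests a candidate cluster:
nearest = min(((a, b) for a in names for b in names if a < b),
              key=lambda ab: matrix[ab])
print(nearest)  # → ('sherd_A', 'sherd_B')
```

From such a matrix, standard hierarchical clustering groups the assemblage automatically, which is the "pottery clusters" step the presentation described.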

Any 3D scanner can be used in the field, by anyone, with this algorithm and its MATLAB GUI, and the approach can also be used with photogrammetry.

In summary, the system offers a complete and automatic 3D documentation and profile drawing procedure for almost any archaeological find, and it is very possible that this 3D technology could replace traditional imaging and documentation methods.

5:30 - 6:00 pm: Concluding Remarks by Nick Eiteljorg hinged on the concept that the speed of change in our modern world is a given and that more change is already happening, including in archaeology. Bigger changes than we can predict are on the horizon in terms of technology, and we, on the human side, have to be ready to accept that change as it comes.

We have to be careful of the change, as well, though, and be analytical of that change and not just accept change for the sake of change.

We, as archaeologists, didn't develop any of the systems discussed in the Conference, but we have to embrace them, carefully and cautiously.


If you have any questions or comments regarding the Conference in general or if you have any comments regarding this Conference website or the University of Puget Sound Archaeology website and its content, please feel free to contact Chris Mundigler.