History of the HSP
In the 2010s, it became increasingly apparent that Manuscripta Mediaevalia, the central electronic manuscript catalog developed in the late 1990s, was not keeping pace with advances in digitization. In 2015, therefore, the Manuscript Centers and the Scientific Advisory Board formed an initiative to develop an entirely new portal system that would replace Manuscripta Mediaevalia and meet the current requirements of the digital age.
The new manuscript portal “Handschriftenportal” was to serve as:
- a central information system for Germany’s manuscript heritage
- a common place for recording and providing descriptive data on German manuscripts
- a central directory of digitized manuscripts
and thereby fulfill the following requirements:
- transparent and reusable development, closely aligned with the needs of research and manuscript-owning institutions
- technically innovative and based on open standards
- high usability
Project Phase I
After the grant application was approved by the DFG, the first development phase of the Handschriftenportal began in October 2018. By summer 2022, the essential basic functions of the HSP system had been developed in Phase I:
- a centralized data repository, cataloging, and management system
- a presentation system for descriptive information and digital images
- an intuitive retrieval system
Technology
As a technical product, the HSP is developed by the IT departments of the participating institutions and is built on open-source components. The resulting code is in turn published as open source.
The system is based on a microservice architecture and follows international standards for text and image processing, using TEI for descriptive texts and IIIF for digital images.
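To illustrate what building on these standards means in practice, the following minimal Python sketch shows how a client application might read a IIIF Presentation 3.0 manifest and list the image URLs of its canvases. The manifest URL, function name, and structural assumptions (a version 3.0 manifest with painting annotations) are illustrative only and are not taken from the HSP's actual API.

```python
# Minimal sketch: reading a IIIF Presentation 3.0 manifest with plain Python.
# The manifest URL below is a placeholder, not an actual HSP endpoint.
import requests

MANIFEST_URL = "https://example.org/iiif/manuscript-123/manifest.json"  # hypothetical


def list_canvas_images(manifest_url: str) -> list[tuple[str, str]]:
    """Return (canvas label, image URL) pairs from a IIIF 3.0 manifest."""
    manifest = requests.get(manifest_url, timeout=30).json()
    pages = []
    for canvas in manifest.get("items", []):          # canvases (pages/leaves)
        # IIIF 3.0 labels are language maps, e.g. {"de": ["Bl. 1r"]}
        label = next(iter(canvas.get("label", {}).values()), ["?"])[0]
        for page in canvas.get("items", []):          # annotation pages
            for annotation in page.get("items", []):  # painting annotations
                body = annotation.get("body", {})
                if body.get("type") == "Image":
                    pages.append((label, body.get("id")))
    return pages


if __name__ == "__main__":
    for label, url in list_canvas_images(MANIFEST_URL):
        print(label, url)
```

Because IIIF manifests follow a published specification, a sketch like this works against any compliant image repository, which is the point of adopting the standard rather than a portal-specific image format.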
Data Editing
A particular goal of Phase I was to supplement the Manuscripta Mediaevalia dataset and make it more usable. For this purpose, 257 printed manuscript catalogs were scanned and their full texts prepared for the HSP.
To process and enrich the data, a data editorial team was set up, distributed across all four participating institutions. One of its tasks is standardizing the metadata structure to ensure optimal search results.