NISO Plus 2021

Global
February 22-25, 2021
NISO Plus 2021 was our first virtual conference, held in February of 2021. A global undertaking, NISO Plus 2021 had over 800 participants from 26 countries come together to have a conversation about the state of the information ecosystem. Here you'll find both the presentations and discussions from that event.

NISO Plus 2021

A focus on accessibility

52:26
Why are organizations continuing to be blind to accessibility?
When we see a person in a wheelchair struggling to navigate a curb with no ramp, we get frustrated and help the person out. In most cases we don't see people who are struggling to read digital content, so the issue is not as obvious to the public. It is not until a lawsuit has been filed that the public hears about the persistent issues around digital content accessibility. Even today, there are organizations that have not addressed accessibility to make digital content available to sight-impaired users. Why are businesses and organizations still blind to accessibility, and what can they do to become compliant?

The evolution of accessibility: upgrading the experience for all users
From learning disorders to reading disorders to hearing, visual, and physical impairments, how do we ensure that every user is able to use the library to its fullest potential? This session will discuss the needs of ALL users, and answer a range of questions: How should digital services support the different kinds of accessibility requirements? How do we make all aspects of library systems equally available to all users by adhering to accessibility standards and usability best practices? How do we actually comprehend our users’ accessibility needs in the first place? And how do we keep our services continuously up-to-date in order to meet our users’ accessibility needs as we deploy new tools in our libraries?

Looking for an accessible open science: overcoming barriers within SciELO Network 
People with disabilities and/or the elderly, who currently represent 45% of the world population, especially those associated with higher education and research institutions, have reported significant difficulties in securing their rights to accessible information. It is therefore important to portray the current state of accessibility in the interfaces of the SciELO Brazil collection and in its digital assets. To this end, an accessibility assessment was carried out on the SciELO Network website pages. The results indicated the absence of alternative text for images, link labels, and page-language indication, among other issues. Considering 82,716 (19.5%) scientific articles from 9,045 volumes published between 2017 and 2020, 205,921 figures and 173,976 tables were identified. Although 95.05% of the tables are encoded in HTML (98.29% with descriptive labels and 98.18% with captions), none of the analyzed articles presented the elements essential for conveying information to assistive technologies. This scenario highlights the need to adopt standards that promote accessibility at every stage of the production and dissemination of knowledge.
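
A minimal sketch of the kind of automated page audit described above, assuming the third-party requests and beautifulsoup4 packages. The URL is a placeholder, and the three checks simply mirror the reported gaps (missing alternative text, unlabeled links, no page-language declaration); this is not the authors' actual assessment tooling.

```python
# Illustrative accessibility audit, not the study's methodology.
import requests
from bs4 import BeautifulSoup

def audit_page(url):
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    # Images with no alt attribute cannot be described by screen readers.
    missing_alt = [img for img in soup.find_all("img") if not img.get("alt")]
    # Links with neither visible text nor an aria-label are unannounceable.
    unlabeled = [a for a in soup.find_all("a")
                 if not a.get_text(strip=True) and not a.get("aria-label")]
    # Assistive technologies rely on <html lang="..."> to pick a voice.
    has_lang = bool(soup.html is not None and soup.html.get("lang"))
    return {"images_missing_alt": len(missing_alt),
            "links_without_labels": len(unlabeled),
            "page_language_declared": has_lang}

print(audit_page("https://www.scielo.br/"))  # placeholder URL
```
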
Much has been written about the dangers of bias — conscious and unconscious — in the creation of artificial intelligence.  Our speakers will share their thoughts on the challenges of metadata creation for and by AI, and how standards and best practices can help address them.
Need help understanding the connections between authentication, authorization, and access to resources in the ever-changing world of scholarly resources? Join us for an overview of emerging services (GetFTR, Seamless Access, and more), learn how they are connected (or not!), and get involved in the discussion on how sorting out access can get us closer to meeting patrons' expectations for a personalized research experience. The session is an opportunity to expand your appreciation of the role your organization plays in the wider information ecosystem.
Ebooks offer enormous accessibility opportunities for print-disabled individuals, from the quality of the reading experience to the timely availability of content. These achievements are largely due to huge efforts in the areas of both open standards and modern software tools. However, due to the complex nature of the publishing and distribution supply chain, end users may still face challenges when they simply want to acquire and read an ebook. Our speakers will introduce what ebook accessibility is, who it benefits, what open standards to be aware of, how to check ebooks for accessibility, and how to ensure that accessible content is recognized as such in distribution channels.
They will also introduce the open-source and non-profit initiatives that greatly ease the setup of an accessible publishing workflow, linking together publishers, distributors, ebook sellers, public libraries, universities, and end users.
This session will allow participants and speakers to discuss how we can ensure that the whole ebook publishing and distribution supply chain is well aligned towards providing content to everybody, what best practices are required, and who needs to be involved.
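
One mechanism behind "ensuring that accessible content is recognized as such in distribution channels" is schema.org accessibility metadata carried in the EPUB package document. Below is a standard-library-only sketch that reports whether an EPUB declares any of those properties; the filename is a placeholder and the property list is abridged, so treat this as an illustration rather than a conformance checker.

```python
# Illustrative check (not the speakers' tooling) for schema.org accessibility
# metadata in an EPUB's package (OPF) document.
import zipfile
import xml.etree.ElementTree as ET

ACCESS_PROPS = {"schema:accessMode", "schema:accessibilityFeature",
                "schema:accessibilitySummary", "schema:accessibilityHazard"}

def accessibility_metadata(epub_path):
    with zipfile.ZipFile(epub_path) as z:
        # container.xml points at the package (OPF) document.
        container = ET.fromstring(z.read("META-INF/container.xml"))
        ns = {"c": "urn:oasis:names:tc:opendocument:xmlns:container"}
        opf_path = container.find(".//c:rootfile", ns).get("full-path")
        opf = ET.fromstring(z.read(opf_path))
    found = {}
    for meta in opf.iter("{http://www.idpf.org/2007/opf}meta"):
        prop = meta.get("property")
        if prop in ACCESS_PROPS:
            found.setdefault(prop, []).append((meta.text or "").strip())
    return found

# An empty result suggests the EPUB does not declare its accessibility
# features, so downstream catalogs cannot advertise them.
print(accessibility_metadata("book.epub"))  # placeholder filename
```
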
NISO Plus 2021

An introduction to NISO

49:08
Join us for an introduction to all things NISO! Curious about our work? Want to learn more about current standards and other NISO projects? Join us for an introduction and to ask us questions.
NISO Plus 2021

Conference Opening

30:19
Welcome to NISO Plus 2021! Join us for an overview of the conference and a welcome from our Conference Chair, our Executive Director, and members of our Planning Committee.
Indigenous Knowledge affects all sectors of the global economy, such as health, food and agriculture, architecture, and music; the list goes on and on. But this knowledge is going extinct because it has not been harnessed and preserved. How should we as information professionals treat this kind of knowledge? Can we document it? Can we apply standards to it? How do we treat such knowledge and harness it for global development?
Everyone is painfully aware of the degree to which copyright law in the United States hasn't kept pace with the modern digital world, and how existing standards may or may not "fit" well with digital goods. Especially this past year, digital lending in libraries has been pushed to its limits and, some would say, beyond.

Join Kyle and Carlo for a discussion about these limits and the new model of Controlled Digital Lending, and ask questions of the experts.
Serials crisis: can data help treat this chronic condition?
When it comes to library budgets, there is an ever-widening gap between university budgets and library budgets, with library budgets continuing to shrink even when university budgets increase. This gap is neither new nor novel; but now, amid a global pandemic, moves to online learning, and unprecedented changes in higher education budgets, the problem is worse than ever and poised to accelerate at a rapid pace.

What may be different, however, is that today’s deep well of data can help stakeholders track down, understand, and respond to the challenge. But what data is most useful and how can we gather it?

In this session, we will take a global view from leading experts in both consortia and institutional libraries about data hunting and innovative ways data can be used to make the most of purchasing spend. Visibility into current pricing trends, including by discipline, business model, and publisher, helps inform the purchasing context. At-a-glance intelligence about business models and deals, as well as descriptions of emerging model and deal types, helps early career professionals and senior staff alike keep current on this rapidly changing landscape.

We ask participants to come prepared to share experiences advocating for funding from institutions, as well as turning those dollars into access to the resources in demand by constituent academics and researchers.

Impact transparency: creating visibility into research outcomes
Impact and outcome measurement are major topics in the research community. Trillions of dollars are spent annually on research programs, yet far too many organizations still struggle to answer the basic questions: What was the overall impact of the funding on research and on the researchers’ careers? How do I link the research I funded or received to publications, patents, clinical trials and other outputs?

Why is it so hard to answer these questions?

While many research-oriented organizations aspire to operationalize their impact and outcome tracking and reporting, many are still reliant upon inadequate or incomplete datasets and management systems.

This talk will explore how advances in metadata, artificial intelligence, and open research infrastructure make it possible to create unprecedented transparency into outcomes from award to publication and beyond. It will then describe how transparency into outcomes benefits the entire research ecosystem.

Leveraging years of grants management data, we'll show graphically how funding organizations and research institutions can precisely identify outcomes years after a grant has concluded, as well as trace the arc of their researchers' work from award to publication and beyond.
FAIREST of them all
Findable, Accessible, Interoperable, Reusable. We all think FAIR data is a “good thing”, don’t we? Who could be against something that, once stated, is so blindingly obvious? If it’s not findable and usable, then why are we spending time, money, and resources keeping it? And it’s a great acronym as well. FAIR. It “does what it says on the tin”. If you’ve got a good acronym you’re halfway there when it comes to hearts and minds….

But I think FAIR doesn’t go far enough. There are other major factors that need to be considered alongside FAIR. What about the costs involved in making information FAIR? Where does trust come in? What about the environmental impact?

Time for a new acronym that goes beyond simply “FAIR”.

What should that be? This talk will propose one possible extension to the concept...

Leveraging FAIR Data Principles to Construct the CCC COVID Author Graph
A knowledge graph is an innovative and revealing visual exposition of data. Display of such data is a powerful way to explore connections and query relationships among different entities, but only if the underlying data is of high quality.

The global research community’s effort to fight COVID-19 has led to an explosive increase of manuscripts submitted to peer-reviewed journals. With this influx of submissions, publishers have recognized limitations in the existing methods for identifying appropriate peer reviewers to validate the accuracy, impact, and value of these manuscripts.

To address this challenge, CCC developed the COVID Author Graph, a knowledge graph highlighting peer review data focused on authors who have published in areas with special attention to coronaviruses, SARS, MERS, SARS-CoV-2, and COVID-19. This new approach helps publishers leverage data to aid in the accelerated identification of peer reviewers.

During this session, presenters will illustrate how the FAIR data principles – Findability, Accessibility, Interoperability, and Reuse of digital assets – served as a trusted foundation for building the COVID Author Graph. They will share key learnings with an emphasis on how the reliable levels of data quality that FAIR principles make possible enable more sophisticated analysis and help organizations derive actionable business insights.

FAIR (meta)data - low-hanging fruit for scholarly publishers (Brian Cody)
Drawing on experience working with journal publishers to collect, enhance, and format metadata, this portion of the session provides an overview of FAIR principles and shares concrete steps for beginning your FAIR journey.
NISO Plus 2021

Digital Humanities

49:22
The Digital Humanities community is one of the most active in the information standards world — from supporting good metadata and standardized taxonomies to data preservation, and so much more! This panel brings together digital humanities experts from South Africa, the UK, and the US to discuss when, how, and why they use standards in their work, and what improvements and additions are needed.
NISO Plus 2021

Discoverability in an AI world

43:11
The information community is increasingly reliant on AI-driven search engines to enable our content to be discovered by users. Our speakers will highlight what this means for content creators, curators, and users alike — in terms of both challenges and opportunities for us all.

Andromeda Yelton will be speaking, while Christine Stohn and Karim Boughida will be available for questions during the discussion period.
The impact of the COVID-19 pandemic and the Black Lives Matter protests in 2020 was (and continues to be) felt all around the world — including the information world: the rapid shift to online learning; the dramatic increase in the use of digital resources; the challenges of working from home; budget and hiring cuts and freezes. Which of these changes — and more — will be permanent? What will the information community look like five or ten years from now? The lessons we learn will help us better understand the fragility or resilience of our organizations and structures, our processes and policies. This session includes perspectives from librarians, publishers, and vendors from around the world about what their experiences in 2020 have taught them.
NISO Plus 2021

Discussion: Digital Humanities

23:15
The Digital Humanities community is one of the most active in the information standards world — from supporting good metadata and standardized taxonomies to data preservation, and so much more! This panel brings together digital humanities experts from South Africa, the UK, and the US to discuss when, how, and why they use standards in their work, and what improvements and additions are needed.

Unlocking JSTOR & Portico for Text Analysis & Pedagogy 
Text analytics, or the process of deriving new information from pattern and trend analysis of the written word, is making a transformative impact in the social sciences and humanities. Sadly, there is a massive hurdle facing those eager to unleash its power: the coding skills and statistical knowledge that text mining requires can take years to develop; moreover, access rights to high quality datasets for text mining are often cost prohibitive and may require further license negotiations. Over the past several years, JSTOR’s Data for Research (DfR) has addressed some of these issues, providing metadata and full-text datasets for its archival content. In January, ITHAKA – the organizational home of JSTOR and Portico – announced a completely new platform that incorporates DfR’s features and adds visualization tools and an integrated analytics lab for learning and teaching text analysis. At NISO Plus, key members of the ITHAKA team will describe the design of this new multifaceted platform and highlight how its components can intersect with the needs of librarians, publishers, educators, students, and faculty. The presenters will emphasize the platform’s hosted analytics lab, where librarians and faculty can create, adapt, and adopt text mining analysis code that works with publisher content for data science instructional sessions.
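
For readers new to the topic, the kind of entry-level exercise such an analytics lab might host can be as small as counting term frequencies across a corpus. The two documents below are placeholders, and this is in no way ITHAKA's platform code or API:

```python
# A taste of an introductory text-mining exercise: term frequency.
from collections import Counter
import re

corpus = [  # placeholder documents
    "Text analytics derives new information from patterns in the written word.",
    "Access to high quality datasets for text mining is often cost prohibitive.",
]

def tokens(text):
    # Lowercase and keep simple word characters; real pipelines do far more.
    return re.findall(r"[a-z']+", text.lower())

term_freq = Counter(t for doc in corpus for t in tokens(doc))
print(term_freq.most_common(5))
```
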
Collections as Data: From Digital Library to Open Datasets
Collections as Data “aims to encourage computational use of digitized and born digital collections” (https://collectionsasdata.github.io/statement/), but how do you get started developing a Collections as Data program, especially with existing staff and technology resources? The Digital Library Services department at the University of Utah will share their practical approach to Collections as Data, ranging from releasing oral history data for text mining to developing a metadata transcription project to create a new historical dataset of mining labor employment records. We will also discuss developing partnerships with digital humanists on campus and the potential uses of the collections we’ve released to the public. Finally, we will show how analyzing digital collections with a digital humanities approach can provide new insights into potential new processes for descriptive metadata creation.
Public Humanities: Challenges and Opportunities
Directors of leading humanities associations and initiatives discuss the impact and the challenges of public humanities, as an idea and as a method of study. What is public humanities, and why is it important? What forms of scholarly and creative output does it encompass? How can it be recognized and supported at institutional, national, and international levels? How do humanities scholars engage with a broad, diverse audience? How might these encounters change the nature and course of humanities study?

Join Modern Language Association Executive Director Paula Krebs, University of Virginia President's Commission on Slavery and the University Chair Kirt von Daacke, and University of Illinois Chicago Engaged Humanities Initiative Director Ellen McClure for a roundtable discussion and lively Q&A.

A two-part session focusing on identifiers, metadata, and using them to make connections!

Part 1: Hocus pocus: Mixing open identifiers into metadata makes connections between research work
Journal articles don’t exist in a vacuum. There is increasing awareness of the need to reliably connect articles, data, affiliation, contributor and funding information to expose trends and opportunities in the research ecosystem, enable reliable streamlined reporting to key stakeholders and to ensure transparency and trust in research.
To support this, metadata for research objects can’t exist in a vacuum either. It needs to reflect these relationships and incorporate a range of persistent identifiers to do so. And it needs to be open so that it can propagate through different systems. DataCite, ROR, Crossref, and ORCID have been working together to look at how relationships are asserted between articles, data, and other content types, and what connecting research objects to other identifiers helps us see: which outputs resulted from a research grant, which institutions are particularly strong in which areas, where and how openly available data and software are used, and who researchers are collaborating with. We can also use these existing relationships to infer further connections via tools like the PID Graph, and the community can (re)use our open metadata to build new services and tools.

Join representatives from ORCID, DataCite, ROR, and Crossref as we share the kind of information that’s already available, what work we still have to do, and our plans to enhance this in collaboration with our communities.
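
As a concrete taste of the open connections described above, the sketch below pulls ORCID iDs and funder links for one article from Crossref's public REST API (api.crossref.org). The DOI is a placeholder, and the selected fields are only a small slice of what a Crossref record carries:

```python
# Sketch of open metadata connections via Crossref's public REST API.
import requests

def article_links(doi):
    work = requests.get(f"https://api.crossref.org/works/{doi}",
                        timeout=30).json()["message"]
    return {
        # Contributor identifiers, where authors registered an ORCID iD.
        "orcids": [a["ORCID"] for a in work.get("author", []) if a.get("ORCID")],
        # Funder names and award numbers asserted in the deposited metadata.
        "funders": [(f.get("name"), f.get("award", []))
                    for f in work.get("funder", [])],
        "reference_count": work.get("reference-count"),
    }

print(article_links("10.5555/12345678"))  # placeholder DOI
```
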

Part 2: Data visualization
In a crowded digital media landscape, the first question many authors, editors, and publishers ask is: how can I make sure that my research is widely noticed and well understood? The answer, say Deb Wyatt and Donald Samulack of Cactus Communications, often lies in visualization.
As the research communication landscape changes, we continue to unlock more efficient and impactful ways to communicate research in highly visual and engaging ways. Video and graphical content formats are now core components of research publishing: just as important for the understanding of science and scholarship as published articles and monographs.
The second critical question is, how do we share research content reliably and accurately, in line with established community standards of rigor, ethics and integrity? As we embrace new formats for research communication, the challenge is to ensure that we continue to apply the same standards of rigor, transparency and FAIR principles to this derivative content.

Concluding this session, Dario Rodighiero will present his data visualization of ... NISO Plus 2021! He notes that, in sociology, digital traces are the data that humans leave behind during daily activities. Open data and identifiers are not only instruments to make science more transparent and accessible; they also represent a meaningful way to study the behavior of scientists. This talk aims to present how these digital traces can be used to observe the academic environment.
Digitalization is enabling the creation of all sorts of new and innovative forms of scholarly publishing. In this session, experts from three very different publishing organizations will share examples and discuss what they think the future holds.
NISO KBART Validator App 
How can we enhance trust in the quality of KBART files? The endorsement process is one way. Automated validation could be another.

Content providers can have their KBART files endorsed by NISO. But the endorsement process consists of manual checks and thus can be a long process, with multiple file revisions and much back-and-forth communication required. The KBART Standing Committee aims to formalize and speed up its endorsement process by automating a number of validation tasks, thus providing more time to analyze parts of the files that are trickier to check automatically. Automated validation could also occur upstream, by content providers checking their KBART files post-production, or downstream, by knowledge bases checking KBART files before ingestion. What if all these scenarios relied on a shared tool?

The NISO KBART Validator app has two goals:
* Short term: ease NISO’s endorsement process by automating the file checks that lend themselves to automation
* Long term: provide the community with a common validator app

The NISO KBART Validator app is currently under development. This session will provide a demo of the tool and insights into its roadmap. We want this app to be community-powered: we’ll take time in this session to discuss where you and your organization could help, with or without developers.
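
The NISO app itself was still under development at the time of this session, but the flavor of check it automates can be illustrated: KBART files are tab-separated text with a prescribed header row and field order. A minimal sketch, with the field list abridged from the KBART recommended practice and a placeholder filename following the KBART naming convention:

```python
# Illustrative KBART checks only -- not the NISO KBART Validator app.
import csv

KBART_FIELDS = ["publication_title", "print_identifier", "online_identifier",
                "date_first_issue_online", "num_first_vol_online",
                "num_first_issue_online", "date_last_issue_online",
                "num_last_vol_online", "num_last_issue_online", "title_url",
                "first_author", "title_id", "embargo_info", "coverage_depth",
                "notes", "publisher_name"]  # abridged field list

def validate(path):
    errors = []
    with open(path, newline="", encoding="utf-8") as fh:
        reader = csv.reader(fh, delimiter="\t")
        header = next(reader)
        # The header row must reproduce the KBART fields in order.
        if header[:len(KBART_FIELDS)] != KBART_FIELDS:
            errors.append("header row does not match KBART field order")
        for n, row in enumerate(reader, start=2):
            if len(row) != len(header):
                errors.append(f"line {n}: expected {len(header)} columns, got {len(row)}")
            elif not row[0].strip():
                errors.append(f"line {n}: publication_title is empty")
    return errors

print(validate("Provider_Package_2021-02-22.txt") or "no basic errors")
```
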

The Package ID: Seeking Sanity through Standards
Content providers often bundle offerings into pre-set collections by subject, year, or some other scope so libraries can select packages that best fit their needs. Publishers also sell individual journals and books, allowing libraries to select content title-by-title. These options provide an effective approach to selling content. However, they produce a confusing, ever-changing tangle in knowledgebases.

Currently, package names are used as identifiers, which introduces challenges for knowledgebase providers and librarians. Marketing pages, access platforms, licenses, invoices, and knowledgebases may all use different names for the same packages. Additionally, package names change, differ across systems, and different bundles often have very similar names.

Knowledgebase providers load the content bundles to serve as the basis for discovery, linking, and ERM processes. Using package names as the identifier makes it difficult to uniquely identify collections. The problem also affects automatic updates to the knowledgebase, whether in general or within a specific library’s holdings. Likewise, librarians have a difficult time determining which of the many similar-sounding packages matches their licensed content.

Ultimately, all parties want to ensure that the licensed content is represented and enabled in knowledgebases for discovery and linking. Consistent unique identifiers may offer a way to improve efficiency and reduce confusion.
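
The failure mode, and what a shared identifier would buy, can be shown in a few lines. The package names and the "pkg:" scheme below are purely hypothetical, not a proposed standard:

```python
# Name drift defeats string matching; a shared opaque ID survives relabeling.
names_seen = {  # three labels for what is, in fact, one collection
    "Springer Medicine eJournals 2021",
    "Springer Nature Medicine Journals",
    "SpringerLink Medicine Package",
}
print("Springer Medicine eJournals" in names_seen)  # False: near-miss name

# With a registry mapping every known label to one identifier, all three
# labels resolve to the same package regardless of how systems spell it.
registry = {name: "pkg:1f3a9c" for name in names_seen}  # hypothetical ID
print(len(set(registry.values())) == 1)                 # True
```
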
Research begins and ends with the library
Emerging options for open journal publishing
While journal publishing workflows reach the same ends whether inside of a library or a publishing house—namely, supporting research outputs—libraries are ideally positioned to enhance open research infrastructure by providing end-to-end service for researchers. Libraries already support institutional repositories and provide access to millions of titles; meanwhile, librarians offer support for collecting, analyzing, and visualizing scholarly data sets. By publishing journals and other materials, libraries participate in another key part of the research lifecycle. Additionally, most library publishers follow the platinum open access publishing model (i.e., free of costs to both authors and readers), which can help amplify underrepresented voices in scholarly publishing.

Despite the benefits libraries can offer to researchers and societies by publishing their scholarship, challenges abound with building out the infrastructure for executing publishing workflows. The Library Publishing Coalition seeks to fill this gap by helping libraries develop, maintain, and improve publishing services, workflows, and infrastructure. At this point, library publishing is a work in progress, but the foundation has been laid, and libraries have the potential to be a formidable force in the open research movement by taking on the publishing of open access journals.

From creation to consumable knowledge: supporting research workflows in an open infrastructure
It goes without saying that the library fulfills a key role in delivering services to users in support of the acquisition and dissemination of knowledge. Certainly today, as humanity seeks to address some of its most pressing scientific challenges, open and reliable access to information takes center stage. Where and how, then, can the role of the library evolve to support research and speed the time from its creation to consumable, living knowledge?

This discussion will focus on ways in which libraries and vendors alike can support research in an open infrastructure. The presenter will look at researchers’ needs to conduct and share their work, while considering how the library – on its part – can best collect, preserve, disseminate, and manage the research. Specific attention will be paid to open collaboration platforms in support of open science. The presenter will also discuss how open source solutions may best support evolving needs for innovation in library workflows and the delivery of new services to users in support of research, teaching, and learning.

Linked data is central to the semantic web and, therefore, to the future of information sharing. Our two expert speakers will share their views on the current state of play with linked data in the information community, and what they predict the future will bring.

NISO Plus 2021

Discussion: Metadata and Discovery

19:50
What You Can Do to Help Promote Transparency in Discovery -- and Why
NISO recently updated the Open Discovery Initiative Recommended Practice (https://www.niso.org/publications/rp-19-2020-odi), which outlines best practices for working with library discovery services. It defines ways for libraries to assess the level of content provider participation; streamlines the process by which libraries, content providers and discovery service providers work together; defines models for “fair” linking; and suggests usage statistics that should be collected for libraries and for content providers. The recommendations in this document, created by members of the Open Discovery Initiative Standing Committee, enable libraries, discovery service providers, and content providers to work together to the full extent of their abilities—providing the most effective and rich experience to end users.

In this presentation, you will learn about the Open Discovery Initiative, what changes were included in the 2020 revision of the ODI Recommended Practice, and delve more deeply into several areas: free-to-read content, fair linking, and the key elements included in the newly added library conformance statements.

Better metadata makes a difference
Libraries create, ingest and use metadata for a variety of purposes and activities, including supporting end user discovery of resources and collections. In order to successfully facilitate resource discovery, librarians must ensure that the metadata in their systems and discovery layers is standardised, accurate and as complete as possible; otherwise, their collections can be rendered essentially invisible to the user.
 
In order to improve metadata visibility and quality, librarians need initial and continuing technical training. Dr Diane Pennington will discuss how she provides training in metadata, cataloguing, and library systems in the MSc Information & Library Studies course at the University of Strathclyde’s iSchool. She will also provide an overview of her students’ broad range of applied and theoretical metadata research in order to illustrate the need for critically-informed, evidence-based metadata practice and implementation.
 
You will then hear from Emma Booth about the National Acquisitions Group Quality of Shelf-Ready Metadata Project, which collected data from UK academic libraries about their experiences with vendor-produced metadata for books and e-books. This case study serves to illustrate how poor quality metadata has a genuinely negative impact upon libraries and their users. It also demonstrates that the development and adoption of standards related to metadata quality is in the interests of everyone involved in the supply and use of library content because all stakeholders in the supply chain stand to benefit from ‘better’ richer metadata that can effectively bridge the gaps between information and communities.


Can open access play a role to fight fake news?
The subject of fake news is very topical. With social networks and the advances of artificial intelligence, fake news is created and circulates faster and faster. Health and science are particularly fertile ground for fake news, and the current context of the COVID-19 pandemic has made this crisis in scientific information even more obvious.
Building on results presented at the Open Science Conference 2020, we built a prototype that automatically analyzes open access research articles to help verify scientific claims.
This prototype takes a claim such as “does coffee cause cancer?” as input and builds three indicators to evaluate the truth behind the claim. The first indicator assesses whether the claim has been extensively studied. The second indicator is based on an NLP pipeline and analyzes whether the articles generally agree or disagree with the claim. The third indicator is based on the retrieval and analysis of numerical values from the pertinent articles.
In this session, we would like to present our methodology, discuss the results that we have obtained and extend the discussion to the role that Open Access scholarly literature can play to fight false scientific claims and to help inform the public.
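
As a rough illustration of the three-indicator design (not the presenters' actual pipeline), the skeleton below stubs out the retrieval, stance-classification, and value-extraction components; the threshold and stub values are arbitrary:

```python
# Skeleton of a three-indicator claim check -- illustrative only.
def coverage_indicator(articles):
    """Indicator 1: has the claim been extensively studied?"""
    return min(len(articles) / 100, 1.0)  # saturates at 100 articles (arbitrary)

def stance_indicator(articles, classify_stance):
    """Indicator 2: do the articles tend to agree (+1) or disagree (-1)?"""
    stances = [classify_stance(a) for a in articles]  # each in {-1, 0, +1}
    return sum(stances) / len(stances) if stances else 0.0

def numeric_indicator(articles, extract_values):
    """Indicator 3: median of numerical values reported in the articles."""
    values = sorted(v for a in articles for v in extract_values(a))
    return values[len(values) // 2] if values else None

# Usage with stub components standing in for real retrieval and NLP models:
articles = ["abstract 1 ...", "abstract 2 ..."]
print(coverage_indicator(articles),
      stance_indicator(articles, classify_stance=lambda a: 1),
      numeric_indicator(articles, extract_values=lambda a: []))
```
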

Connecting the dots: A cross-industry discussion on retracted research
Issues around the capturing, acknowledgement, classification, and tracking of retracted research are shared by academic institutions, publishing organizations, and the technology providers who support them. This cross-industry panel, moderated by a researcher and comprised of representatives from a non-profit publisher, an academic library, and a publishing platform provider, will examine shared obstacles and opportunities in processing, documenting, and communicating retractions, and will provide practical strategies for cross-industry collaboration. The panel will be moderated by Jodi Schneider, Assistant Professor, School of Information Sciences, University of Illinois at Urbana-Champaign. Jodi and other members of her research team have been spending significant time in 2020 bringing together representatives from all areas of the scholarly communication ecosystem as part of a Sloan-funded agenda-setting project. This moderated conversation will be one deliverable from a series of multiple workshops, interviews, and white papers.

Supporting sound and open science standards at the preprint stage
Preprint deposition and consumption have experienced exponential growth over the past year, particularly during the COVID-19 pandemic. Problems with clarity, transparency, and reproducibility are pervasive in preprints and published articles alike. Given that preprints are often the earliest public-facing outputs of research, preprint platforms are in an ideal position to support, incentivize, and guide authors in the adoption of established standards for improving clarity, openness, and rigor in research reporting. This session discusses the misunderstanding and misrepresentation of some preprints during the pandemic and the unique role that preprint platforms can play in curbing disinformation and cultivating best practices at this critical point in the manuscript development process.
Next generation OA analytics: A case study 
A critical component in the development of sustainable funding models for OA is the ability to communicate impact in ways that are meaningful to a diverse range of internal and external stakeholders, including institutional partners, funders, and authors. While traditional paywall publishers can take advantage of industry standard COUNTER reports to communicate usage to subscribing libraries, no similar standard exists for OA content. Instead, many organizations are stuck with proxy metrics like sessions and page views that struggle to discriminate between robotic access and genuine engagement.

This session presents the results of an innovative project that builds on existing COUNTER metrics to develop more flexible reporting. Reporting goals include surfacing third-party engagement with OA content, the use of graphical report formats to improve accessibility, the ability to assemble custom data dashboards, and configurations that support the varying needs of diverse stakeholders. We’ll be sharing our understanding of who the stakeholders are, their differing needs for analytics, feedback on the reports shared, and lessons learned and areas for future research in this evolving area.
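
One concrete example of the kind of processing rule that separates genuine engagement from noise is COUNTER's double-click filter, which collapses repeated requests for the same item by the same user within 30 seconds into a single count. A sketch, assuming simplified (user, item, unix_timestamp) log tuples rather than any real COUNTER implementation:

```python
# Double-click filtering in the spirit of the COUNTER Code of Practice.
def counter_filtered(events, window=30):
    counted, last_seen = 0, {}
    for user, item, ts in sorted(events, key=lambda e: e[2]):
        key = (user, item)
        # Count only if this user hasn't hit this item in the last `window` s.
        if key not in last_seen or ts - last_seen[key] > window:
            counted += 1
        last_seen[key] = ts  # the later click supersedes the earlier one
    return counted

events = [("u1", "doi:10.x/abc", 100), ("u1", "doi:10.x/abc", 110),
          ("u1", "doi:10.x/abc", 200), ("u2", "doi:10.x/abc", 105)]
print(counter_filtered(events))  # 3: the click at t=110 collapses into t=100
```
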

OA Book Metadata Standards to Support Usage Data Analytics 
This session will explore how current or new standards could address the challenges facing analytics that rely on the OA book usage data supply chain. Laura Ricci of Clarke & Esposito, co-author of the OA Book Supply Chain report produced with support from The Andrew W. Mellon Foundation for the global Exploring Open Access eBook Usage (OAeBU) data trust pilot project, will review the gaps and opportunities presented by the diversity of OA book stakeholders, open-access specific metadata elements and metadata standards. Lorraine Estelle, Director of Project COUNTER, and Brian O’Leary, Executive Director of the Book Industry Study Group, will then discuss how COUNTER and ONIX are positioned to address such issues in current or future releases. The session will conclude with all panelists reflecting upon where additional standards development may be needed.
Opening the ILS/LSP: Steps Towards a Fully Customizable Infrastructure

Library systems and services are at a point where they can be refined to meet the unique goals and needs of specific institutions. In spite of these impressive capabilities, library systems sometimes lack the flexibility afforded by the full interoperability across multiple libraries, vendors, and platforms necessary to ensure peak performance.

The traditional centerpiece of our systems environment, the ILS or LSP, is at a crossroads between allowing the kinds of systems interplay libraries need and the barriers created by contractual issues, technical barriers, and closed infrastructure. This session will highlight specific integration and interoperability concerns with commentary from members of the university, consortia, and vendor communities. The session will also emphasize the benefits of open systems for libraries and vendors, and how NISO could play a role by considering applicable standards through a dedicated working group.

Expediting Access with a Browser Add-on: Open Source vs. Commercial Approach

Providing quick and easy access to the library’s paid resources for researchers has been an ongoing challenge for libraries. One attractive means to achieve this is a web browser add-on, because it has the advantage of being available exactly when and where scholars and researchers spot and try to obtain the full-text content of research materials while online.

LibX, a free and open-source browser add-on developed at Virginia Tech in 2005, was widely adopted and used by many libraries for more than a decade. But recently it has become defunct due to the lack of development effort and general support from the wider library community. Now, some libraries have started licensing and implementing commercial products instead. Even though these commercial add-ons and LibX all aim to facilitate and expedite access, there are some distinct differences in their approaches.

In this session, I will (a) explore those differences in the open-source vs. the commercial approach, drawing examples from LibX and Lean Library, and (b) discuss what may be the ideal user interface design and feature set for a browser extension that meets users’ research needs, delivers a great user experience, and advances the library’s goals at the same time.
The digitalization of research has resulted in the development of many new types of media. How can we ensure that they're adequately preserved for future generations? What new tools and services will be needed? Who should be responsible? Two digital preservation experts will share their views on this important topic.
Thinking with GDPR (Andrew Cormack)

Europe's General Data Protection Regulation (GDPR) is sometimes portrayed as a complicated obstruction to doing what we want. This talk will look at the law behind the slogans: finding a rich source of guidance on how to develop the effective, privacy-respecting services that our customers and users - not just in Europe - need and expect. We'll look at the principles of Accountability, Necessity, Purpose Limitation and Information, and show how these help us design services that work better for users and providers. Specific examples will be taken from access management and data analytics.
mHealth Wearables and Apps: A changing privacy landscape (Christine Suver)
The use of wearables and smartphone apps to collect health-related data (mHealth) is a growing field. Wearables and health apps can continuously monitor our physical activity, sleep, heart rate, glucose levels, etc. They provide a rich data set that can supplement the data from occasional doctor's visits. But what are the privacy considerations of mHealth? We will explore global privacy principles, discuss the tension between anonymity and data utility, and propose ways to improve privacy notices and policies.


A look at China’s draft Personal Information Protection Law (Judy Bai)

With measures to ensure privacy getting prioritized worldwide, many countries have framed relevant laws and regulations on personal information protection. On October 21, 2020, China released its draft Personal Information Protection Law (PIPL) for public consultation.
When the draft PIPL gets passed, it’ll be China’s central and universal governing law on protecting personal information. While no definitive timeline has been set for the final law, we discuss some of the key features of this important piece of draft legislation and how businesses (based in China and those engaged in commercial interactions with people living in China) should prepare ahead to ensure data privacy compliance.

Publishing is the act of making data and information resources accessible to others. For effective and accurate scholarly communication, developing shared vocabularies, terminologies, and semantic assets is critical to the discovery and understanding of published resources, and helps reduce ambiguity and increase interoperability. Many disciplines and organisations have local lists of vocabularies that serve multiple functions, including enhancing discovery, annotation, and description. With the globalisation of data and information resources, enabling a common understanding of any ‘concept’ used to describe or define a ‘thing’ in our world is becoming critical. There is a growing need to develop common vocabularies that can be used across broader communities and support harmonization of information both within and across disciplines and languages. But we also need to be able to communicate, to users of any semantic asset, information about its sustainability, governance, authority, conditions of use, etc.

Preprints have been growing in popularity and visibility across many disciplines and communities — all the more so during the COVID-19 pandemic, with rapid publication of early research on everything from vaccine development to economic impacts. While preprints have been widely adopted in some disciplines, there are still concerns about their quality and reliability, especially when they can be readily accessed by policy-makers and the public who may not yet fully understand their limitations. This session brings together three experts — from Africa, Latin America, and the US — to discuss the challenges and opportunities of preprints for researchers and non-researchers alike.
Research protocols and information standards have much in common, and both are essential components of research infrastructure. Protocols outline the process for a specific experiment or research project, while standards provide technical best practices for processes across the whole information ecosystem.  In this session, representatives from a publisher, a protocols platform, and an infrastructure organization will discuss their perspectives on the relationships between each — what's working, what isn't, and what more is needed?
Standardising and aligning journal and funder data policies 
An increasing number of publishers and journals are implementing policies that require or recommend that published articles be accompanied by the underlying research data. These policies are an important part of the shift toward reproducible research and contribute to the availability of research data for reuse. However, there is wide variation between policies, which makes it challenging for journal editors to develop and support a data policy, difficult for researchers to understand and comply with, and complex for infrastructure providers and research support staff to assist with data policy compliance.

There is clear benefit in a more standardised approach to policies. This has been the goal of international efforts led by the Research Data Alliance (RDA) Data Policy Standardisation and Implementation Interest Group, resulting in the publication, and subsequent adoption, of a research data policy framework to help journal editors and publishers navigate the creation or enhancement of a research data policy. There are also significant gains to be made in aligning journal and funder data policies, with a project underway to address this challenge. This presentation will be given by co-chairs of the RDA Policy Standardisation group.

The Mystery of the Data and Software Citations... Why They Don’t Link to Our Papers and Credit Their Authors
Scientific data and software are being recognized more and more as first-class research products, preserved in appropriate repositories, and cited in the papers where they were utilized, to provide transparency, support reproducibility, and give credit. Yet the mechanisms that give automated credit and attribution are not being initiated consistently, nor is the ability to link these research products in a machine-readable way. Important elements for this to happen include the citation itself and the persistent identifier (e.g., Digital Object Identifier) that is registered to the research object. This session will explain the current processes and examine the issues, along with recommendations being proposed to help researchers get automated credit and attribution and support linking across research objects.
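
Part of what a registered persistent identifier already enables is machine-readable retrieval of citation metadata via DOI content negotiation. The sketch below assumes the third-party requests package and uses the standard media types documented at citation.crosscite.org; the DOI is a placeholder:

```python
# Retrieve machine-readable metadata, and a formatted citation, for a DOI.
import requests

doi_url = "https://doi.org/10.5281/zenodo.1234"  # placeholder DOI

# CSL JSON: structured citation metadata suitable for automated linking.
metadata = requests.get(doi_url, timeout=30, headers={
    "Accept": "application/vnd.citationstyles.csl+json"}).json()

# A human-readable citation rendered in APA style by the DOI registrar.
apa = requests.get(doi_url, timeout=30, headers={
    "Accept": "text/x-bibliography; style=apa"}).text

print(metadata.get("title"))
print(apa)
```
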


Join us for an update on the Seamless Access project (http://seamlessaccess.org) and a discussion of the issues surrounding the world of Federated Authentication.
NISO Recommended Practices for Video and Audio Metadata
Although many metadata standards address video and audio assets to some extent, a clear, commonly understood, and widely used set of properties is lacking. This is particularly problematic when assets are interchanged between their producers, such as educators, researchers, and documentarians, and their recipients, such as aggregators, libraries, and archives.

The NISO Video and Audio Metadata Working Group (VAMD) was formed to address this problem. Composed of technologists, librarians, aggregators, and publishers, the working group collaborated to develop a set of metadata properties deemed generally useful for the interchange of media assets. This includes bibliographic properties used for identification and citation, semantic properties useful for search and discovery, technical properties specific to media assets, and administrative properties to facilitate transactions.

This model is not intended to employ or replace existing metadata standards and vocabularies. Instead, the VAMD terms are a set of recommended properties to be expressed in the appropriate metadata scheme for specific parties, serving as a hub to facilitate interchange between parties that use different metadata schemes.

This session will present the current state of media asset interchange, the use cases addressed, and the results of a comparison with nine existing related standards, such as MARC and PBCore.
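
Since the VAMD recommended terms were still in development at the time of this session, the record below is purely hypothetical; it only sketches the four property families the working group describes (bibliographic, semantic, technical, administrative):

```python
# Hypothetical media-asset record; property names are placeholders,
# not the VAMD recommended terms.
media_asset = {
    "bibliographic": {"title": "Oral History Interview, 1974",
                      "identifier": "doi:10.5555/example",  # placeholder DOI
                      "creator": "Jane Historian"},
    "semantic":      {"subjects": ["mining labor", "Utah history"],
                      "description": "Interview on mine safety practices."},
    "technical":     {"duration_seconds": 3146, "format": "video/mp4",
                      "resolution": "1920x1080"},
    "administrative": {"rights": "CC BY-NC 4.0",
                       "source_archive": "Example University Library"},
}
```
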

Introducing the Software Citation: Giving Credit Where Credit is Due
Research is commonly intense and complicated. The work to analyze a hypothesis involves building on the discovery of others and contributing new ideas and approaches. Sometimes researchers use tools designed for their community that are licensed or open source, and sometimes they must develop their own software or workflow in order to achieve their objectives. This software (aka code, model) is an important research object that supports transparency and reproducibility of our research. Without the software, it can be much harder or impossible to fully understand how the resulting data were generated and to have faith in the conclusions presented in the paper.

In this session we will share (via slides): 1) the guidance developed by the FORCE11 Software Citation Implementation Working Group for authors, developers, and journals; 2) how it supports and aligns with efforts happening around JATS/JATS4R; and 3) ways for the community to evaluate how well software citations and the necessary availability statements are being provided by authors.
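
As a rough illustration (not the working group's published guidance), a software citation generally needs to credit the authors and pin down the software's name, the exact version used, a persistent identifier, and the year; the formatter and all values below are placeholders:

```python
# Illustrative software-citation formatter; values are placeholders.
def format_software_citation(authors, name, version, year, doi, repository):
    return (f"{', '.join(authors)} ({year}). {name} (Version {version}) "
            f"[Computer software]. {repository}. https://doi.org/{doi}")

print(format_software_citation(
    authors=["Smith, A.", "Katz, D."], name="ExampleSolver",
    version="2.1.0", year=2021, doi="10.5281/zenodo.0000000",
    repository="Zenodo"))
```
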


Subsetting the JATS DTD – So What?
As scholarly publishers transition from manual, PDF-based workflows to automated, XML-based workflows, they will find important advantages in subsetting the JATS (Journal Article Tag Suite) DTD.

JATS was designed as a descriptive, not a prescriptive, DTD, so it allows for different ways to capture the same content and information. While this was necessary to accommodate widely divergent journal styles and legacy content, the looseness of the DTD poses problems for people building tools to bring XML forward in more automated publishing workflows. For example, building an online XML editor that allows all 11 ways of associating authors and affiliations would be unnecessarily complex and expensive to develop and maintain.

Fortunately, the JATS DTD was also designed to be easily subsetted. Content analysts can narrow the variations that developers are required to build to, making automated systems cheaper to develop and more robust. A well-designed subset that considers industry initiatives such as JATS4R also aids in making XML content more machine-readable and thus more discoverable.
In Pursuit of DEI in a Complex Landscape
Diversity, Equity and Inclusion (DEI) has become an important topic across our community. In this session we will discuss how this affects us as organizations, individually and as a community. We will start with short presentations where each panel member discusses the approach of their organization to implementing and promoting DEI. We finish off with a hopefully lively discussion with the audience around concerns, issues and the responsibility that each of us has. We also hope to identify gaps and opportunities for collaborative approaches.

Central to the discussion is the understanding that while each library has their own approach to DEI, Ex Libris takes a holistic approach to its DEI efforts by focusing on commitments within three areas: employment practices, community relations, and its services and products. A key part of the approach is the recognition that employees and industry partners are central to ensuring the organization designs, builds and maintains products that serve everyone equally. The Ex Libris presentation will give a focus on the collaboration with the community a key factor in the development of products that are developed and created for all.

More details to follow
Standards are an essential component of open research infrastructure, enabling interoperability, increasing efficiencies, reducing errors, and improving user experience. In order to be truly effective, standards must be developed and implemented globally, but how? This panel of experts from Australia, Ireland, and Mexico will discuss the challenges for their communities, and identify opportunities for the information community to work together globally to address them.
How does the infrastructure that supports our community get funded, and by whom? How can we ensure its long-term sustainability — and, by extension, that of the research tools and services that depend on it? What does sustainability even mean? Our speakers share their perspectives on these and other important questions.
In the Eye of the Beholder: What’s a Digital Preservation System Anyway? 
Cultural heritage organizations increasingly depend on digital platforms to support the curation, discovery, and long-term management of digital content. Yet, some of these systems and tools have been shown to have substantial sustainability challenges. The long-term stewardship of digital cultural materials depends not only on the technical resiliency of preservation systems, but on their financial and organizational sustainability. Funded by the Institute of Library and Museum Services (IMLS), Ithaka S+R is assessing how digital preservation systems are developed, deployed, and sustained through a series of case studies. We will share our initial findings related to design approaches of community-based and commercial digital preservation and curation initiatives, offer lessons learned, and propose alternative sustainability models for long-term maintenance and development. Although digital preservation is a well-established concept, it continues to be a situated and interpretive process, highly variable across different institutional settings. Rather than trying to adjudicate what does and does not “count” as digital preservation, we are studying the systems and services that cultural heritage organizations might use toward meeting digital preservation goals. In taking this broad approach, we hope to acknowledge the diversity of curatorial practices, priorities, and resource capacities that cultural heritage organizations bring to digital preservation work.

Addressing the pain in preservation
Almost everyone involved with digital information agrees that Digital preservation is a “good thing” and should be part of business as usual. However, implementing widespread preservation is often a very painful process… and many initiatives are stillborn. The pain points are many and varied (no resource; lack of trust; lack of understanding; poor interoperability; no connectivity to name but a few), but they’re also not new. These 20th century problems are all solvable - even more so now that we have access to 21st century technology.

And we intend to do just that… Well, to be more accurate, we intend use the hive mind of NISO attendees to map out pathways to solutions. Outlining the problems and then asking the fun questions. What could take away the pain? How? What need’s to be in place? What’s stopping us from doing it right now?

This session will start with a few short provocations to give a flavour of the problems and possible approaches to solving them. Then the discussion begins. Nothing is off limits. If you have a problem that needs a solution or a solution that’s looking for a problem to solve this is the session for you.

What does AI and machine learning mean for the future of intellectual property? Hear views from two expert lawyers — representing a library and a publishing perspective — and then join the discussion afterwards to share your own views.

In Pursuit of DEI in a Complex Landscape
Diversity, Equity and Inclusion (DEI) has become an important topic across our community. In this session we will discuss how this affects us as organizations, individually and as a community. We will start with short presentations where each panel member discusses the approach of their organization to implementing and promoting DEI. We finish off with a hopefully lively discussion with the audience around concerns, issues and the responsibility that each of us has. We also hope to identify gaps and opportunities for collaborative approaches.

Central to the discussion is the understanding that while each library has their own approach to DEI, Ex Libris takes a holistic approach to its DEI efforts by focusing on commitments within three areas: employment practices, community relations, and its services and products. A key part of the approach is the recognition that employees and industry partners are central to ensuring the organization designs, builds and maintains products that serve everyone equally. The Ex Libris presentation will give a focus on the collaboration with the community a key factor in the development of products that are developed and created for all.

More details to follow
The CRediT (Contributor Roles) taxonomy — already in use by a number of publishers and other organizations — is currently being formalized as an ANSI/NISO standard. It is valued by the community as a way of recognizing more of the many types of research contribution. But there are also still many challenges to be addressed, including the current focus on roles in the STEM publication process, which will be tackled in future phases. The speakers in this session will share their views on the current and future value of CRediT, how that can be maximized in future, and what challenges will need to be overcome for us to be successful.
FAIREST of them all
Findable, Accessible, Interoperable, Reusable. We all think FAIR data is a "good thing", don't we? Who could be against something that, once stated, is so blindingly obvious? If it's not findable and usable, then why are we spending time, money and resources keeping it? And it's a great acronym as well. FAIR. It "does what it says on the tin". If you've got a good acronym you're halfway there when it comes to hearts and minds….

But I think FAIR doesn’t go far enough. There are major impact factors that need to be considered alongside FAIR. What about the costs involved in making information FAIR? Where does trust come in? What about the environmental impact?

Time for a new acronym that goes beyond simply “FAIR”.

What should that be? This talk will propose one possible extension to the concept...

Leveraging FAIR Data Principles to Construct the CCC COVID Author Graph
A knowledge graph is an innovative and revealing visual exposition of data. Display of such data is a powerful way to explore connections and query relationships among different entities, but only if the underlying data is of high quality.

The global research community’s effort to fight COVID-19 has led to an explosive increase of manuscripts submitted to peer-reviewed journals. With this influx of submissions, publishers have recognized limitations in the existing methods for identifying appropriate peer reviewers to validate the accuracy, impact, and value of these manuscripts.

To address this challenge, CCC developed the COVID Author Graph, a knowledge graph highlighting peer review data focused on authors who have published in areas with special attention to coronaviruses, SARS, MERS, SARS-CoV-2, and COVID-19. This new approach helps publishers leverage data to aid in the accelerated identification of peer reviewers.
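As a rough sketch of the underlying idea (not CCC's implementation, and with invented names and DOIs), a co-authorship graph already supports a useful reviewer query: find authors close to a manuscript's topic who have not co-published with its authors and so carry no obvious conflict of interest.

```python
# A toy co-authorship graph; all names and DOIs are invented.
import networkx as nx

G = nx.Graph()
G.add_edge("chen", "okafor", doi="10.5555/example.1")   # edges = co-authored papers
G.add_edge("okafor", "silva", doi="10.5555/example.2")
G.add_edge("chen", "tanaka", doi="10.5555/example.3")

def reviewer_candidates(graph, author):
    """Authors two hops away: near the topic, but not direct co-authors."""
    coauthors = set(graph.neighbors(author))
    two_hop = {n for c in coauthors for n in graph.neighbors(c)}
    return two_hop - coauthors - {author}

print(reviewer_candidates(G, "chen"))  # -> {'silva'}
```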

During this session, presenters will illustrate how the FAIR data principles – Findability, Accessibility, Interoperability, and Reuse of digital assets – served as a trusted foundation for building the COVID Author Graph. They will share key learnings, with an emphasis on how the reliable levels of data quality that FAIR principles make possible enable more sophisticated analysis and help organizations derive actionable business insights.

FAIR (meta)data - low-hanging fruit for scholarly publishers (Brian Cody)
Drawing on experience working with journal publishers to collect, enhance, and format metadata, this section of the session provides an overview of FAIR principles and shares concrete steps for beginning your FAIR journey.
Unlocking JSTOR & Portico for Text Analysis & Pedagogy 
Text analytics, or the process of deriving new information from pattern and trend analysis of the written word, is making a transformative impact in the social sciences and humanities. Sadly, there is a massive hurdle facing those eager to unleash its power: the coding skills and statistical knowledge that text mining requires can take years to develop; moreover, access rights to high quality datasets for text mining are often cost prohibitive and may include further license negotiations. Over the past several years, JSTOR’s Data for Research (DfR) has addressed some of these issues, providing metadata and full-text datasets for its archival content. In January, ITHAKA – the organizational home of JSTOR and Portico – announced a completely new platform that incorporates DfR’s features, as well as adding visualization tools and an integrated analytics lab for learning and teaching text analysis. At NISO Plus, key members of the ITHAKA team will describe the design of this new multifaceted platform and highlight how its components can intersect with the needs of librarians, publishers, educators, students, and faculty. The presenters will emphasize the platform’s hosted analytics lab, where librarians and faculty can create, adapt, and adopt text mining analysis code that works with publisher content for data science instructional sessions.
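For readers new to the field, the following toy example (invented data, plain Python) shows the kind of computation such an analytics lab hosts: token counts over a small corpus, the building block of pattern and trend analysis.

```python
# A toy version of corpus analysis: count word frequencies per document.
import re
from collections import Counter

corpus = {
    1919: "the league of nations and the peace settlement",
    1946: "the united nations and the postwar settlement",
}

for year, text in corpus.items():
    tokens = re.findall(r"[a-z]+", text.lower())
    print(year, Counter(tokens).most_common(3))
```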
Collections as Data: From Digital Library to Open Datasets
Collections as Data “aims to encourage computational use of digitized and born digital collections” (https://collectionsasdata.github.io/statement/), but how do you get started developing a Collections as Data program, especially with existing staff and technology resources? The Digital Library Services department at the University of Utah will share their practical approach to Collections as Data, ranging from releasing oral history data for text mining to developing a metadata transcription project to create a new historical dataset of mining labor employment records. We will also discuss developing partnerships with digital humanists on campus and the potential uses of the collections we’ve released to the public. Finally, we will show how analyzing digital collections with a digital humanities approach can provide new insights into potential new processes for descriptive metadata creation.
Public Humanities: Challenges and Opportunities
Directors of leading humanities associations and initiatives discuss the impact and the challenges of public humanities, as an idea and as a method of study. What is public humanities, and why is it important? What forms of scholarly and creative output does it encompass? How can it be recognized and supported at institutional, national, and international levels? How do humanities scholars engage with a broad, diverse audience? How might these encounters change the nature and course of humanities study?

Join Modern Language Association Executive Director Paula Krebs, University of Virginia President's Commission on Slavery and the University Chair Kirt von Daacke, and University of Illinois Chicago Engaged Humanities Initiative Director Ellen McClure for a roundtable discussion and lively Q&A.
A two-part session focusing on identifiers, metadata, and using them to make connections!

Part 1: Hocus pocus: Mixing open identifiers into metadata makes connections between research work
Journal articles don’t exist in a vacuum. There is increasing awareness of the need to reliably connect articles, data, affiliation, contributor and funding information to expose trends and opportunities in the research ecosystem, enable reliable streamlined reporting to key stakeholders and to ensure transparency and trust in research.
To support this, metadata for research objects can’t exist in a vacuum either. It needs to reflect these relationships and incorporate a range of persistent identifiers to do so. And it needs to be open so that it can flow through different systems. DataCite, ROR, Crossref and ORCID have been working together to look at how relationships are asserted between articles, data and other content types, and what connecting research objects to other identifiers helps us see: which outputs resulted from a research grant, which institutions are particularly strong in which areas, where and how openly available data and software are used, and who researchers are collaborating with. We can also use these existing relationships to infer further connections via tools like the PID Graph and the community can (re)use our open metadata to build new services and tools.
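As a hypothetical sketch of what this openness already makes possible, the snippet below pulls one article's public Crossref record and lists the funder DOIs and ORCID iDs attached to it. The DOI is a placeholder, and which fields are populated varies by record.

```python
# Fetch one work's open Crossref metadata and list linked identifiers.
import requests

doi = "10.5555/12345678"  # placeholder example DOI
record = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30).json()["message"]

for funder in record.get("funder", []):
    print("funder:", funder.get("name"), funder.get("DOI"))
for author in record.get("author", []):
    print("author:", author.get("family"), author.get("ORCID"))
```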

Join representatives from ORCID, DataCite, ROR and Crossref as we share the kind of information that’s already available, what work we still have to do and our plans to enhance this in collaboration with our communities.

Part 2: Data visualization
In a crowded digital media landscape, the first question many authors, editors and publishers ask is: how can I make sure that my research is widely noticed and well understood? The answer, say Deb Wyatt and Donald Samulack of Cactus Communications, often lies in visualization.
As the research communication landscape changes, we continue to unlock more efficient and impactful ways to communicate research in highly visual and engaging ways. Video and graphical content formats are now core components of research publishing: just as important for the understanding of science and scholarship as published articles and monographs.
The second critical question is, how do we share research content reliably and accurately, in line with established community standards of rigor, ethics and integrity? As we embrace new formats for research communication, the challenge is to ensure that we continue to apply the same standards of rigor, transparency and FAIR principles to this derivative content.

Concluding this session, Dario Rodighiero will present his data visualization of ... NISO Plus 2021! He notes that, in sociology, digital traces are the data that humans leave behind during daily activities. Open data and identifiers are not only instruments to make science more transparent and accessible; they also represent a meaningful way to study the behavior of scientists. This talk aims to present how these digital traces can be used to observe the academic environment.
Digitalization is enabling the creation of all sorts of new and innovative forms of scholarly publishing. In this session, experts from three very different publishing organizations will share examples and discuss what they think the future holds.
Indigenous Knowledge affects all sectors of the global economy, such as health, food and agriculture, architecture, and music; the list goes on and on. But this knowledge is going extinct because it has not been harnessed and preserved.

How should we as information personnel be treating this kind of knowledge? Can we document it? Can we apply standards to it? How do we treat such knowledge and harness it for global development?
NISO Plus 2021

Keynote: The Japan Science

54:48
This talk will introduce a challenging research program, the Japan Science & Technology Agency's Moonshot Goal 1 on “Realization of a society in which human beings can be free from limitations of body, brain, space, and time by 2050.” The program was determined by the Japanese Plenary session of Council for Science, Technology and Innovation (“CSTI”), the Ministry of Education, Culture, Sports, Science and Technology (“MEXT”), and JST. It consists of three human-centered R&D projects on Cybernetic Avatars, which will support the creation of cloud infrastructure and core technologies that enable a diverse range of remotely operated social human activities. They will help us adapt and adjust to a new human-centered ‘cybernetic avatar life,’ where these avatars will augment the physical, cognitive, and perceptual capabilities of people from a range of socio-economic and other backgrounds. The cybernetic avatars will be developed from the viewpoint of not only of the providers, but also the users in future society. The project will also conduct basic research on human stress caused by the avatars — and how to relieve this — while taking into account ethical, legal, social, and economic (ELSE) issues.

 
NISO KBART Validator App 
How can we enhance trust in the quality of KBART files? The endorsement process is one way; automated validation could be another.

Content providers can have their KBART files endorsed by NISO. But the endorsement process consists of manual checks and thus can be a long process, with multiple file revisions and much back-and-forth communication required. The KBART Standing Committee aims to formalize and speed up its endorsement process by automating a number of validation tasks, thus providing more time to analyze parts of the files that are trickier to check automatically. Automated validation could also occur upstream, by content providers checking their KBART files post-production, or downstream, by knowledge bases checking KBART files before ingestion. What if all these scenarios relied on a shared tool?

The NISO KBART Validator app has two goals:
* Short term: ease NISO’s endorsement process by automating the file checks that can be automated
* Long term: provide the community with a common validator app

The NISO KBART Validator app is currently under development. This session will provide a demo of the tool and insights about its roadmap. We want this app to be community-powered: we’ll take time in this session to discuss where you and your organization could help, with or without developers.
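To make the idea concrete, here is a minimal sketch (not the NISO app itself, and with the KBART rules much simplified) of the kind of file check that lends itself to automation: verifying tab-delimited structure, expected header columns, and well-formed ISSNs.

```python
# Minimal KBART sanity checks; the real endorsement rules are richer.
import csv
import re
import sys

EXPECTED_PREFIX = ["publication_title", "print_identifier", "online_identifier"]
ISSN = re.compile(r"^\d{4}-\d{3}[\dXx]$")  # serials only; monographs use ISBNs

def check_kbart(path):
    problems = []
    with open(path, encoding="utf-8", newline="") as fh:
        reader = csv.DictReader(fh, delimiter="\t")
        if reader.fieldnames is None or reader.fieldnames[:3] != EXPECTED_PREFIX:
            problems.append("header row does not start with the expected KBART columns")
            return problems
        for lineno, row in enumerate(reader, start=2):
            for field in ("print_identifier", "online_identifier"):
                value = (row.get(field) or "").strip()
                if value and not ISSN.match(value):
                    problems.append(f"line {lineno}: {field} {value!r} is not a valid ISSN")
    return problems

if __name__ == "__main__":
    for problem in check_kbart(sys.argv[1]):
        print(problem)
```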

The Package ID: Seeking Sanity through Standards
Content providers often bundle offerings into pre-set collections by subject, year, or some other scope so libraries can select packages that best fit their needs. Publishers also sell individual journals and books, allowing libraries to select content title-by-title. These options provide an effective approach to selling content. However, they produce a confusing, ever-changing tangle in knowledgebases.

Currently, package names are used as identifiers, which introduces challenges for knowledgebase providers and librarians. Marketing pages, access platforms, licenses, invoices, and knowledgebases may all use different names for the same packages. Additionally, package names change, differ across systems, and different bundles often have very similar names.

Knowledgebase providers load the content bundles to serve as the basis for discovery, linking, and ERM processes. Using package names as the identifier makes it difficult to uniquely identify collections. The problem also affects automatic updates to the knowledgebase, whether in general or within a specific library’s holdings. Likewise, librarians have a difficult time determining which of the many similar-sounding packages matches their licensed content.

Ultimately, all parties want to ensure that the licensed content is represented and enabled in knowledgebases for discovery and linking. Consistent unique identifiers may offer a way to improve efficiency and reduce confusion.
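A toy example (all names and identifiers invented) shows why name matching cannot substitute for an identifier: even aggressive normalization leaves the same package under three different keys, whereas a shared package ID makes reconciliation a plain lookup.

```python
# Three vendor systems naming the same (invented) package differently.
names_seen = [
    "Complete Science Collection 2021",
    "Sci. Collection (Complete), 2021",
    "Complete Science Coll. 2021",
]

def normalize(name):
    return "".join(ch for ch in name.lower() if ch.isalnum())

print({normalize(n) for n in names_seen})  # still three distinct strings

# With an agreed identifier, every system keys on the same string:
knowledgebase = {"pkg:examplepub:sci-complete-2021": {"titles": 412, "licensed": True}}
print(knowledgebase["pkg:examplepub:sci-complete-2021"])
```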
Research begins and ends with the library
Emerging options for open journal publishing
While journal publishing workflows reach the same ends whether inside of a library or a publishing house—namely, supporting research outputs—libraries are ideally positioned to enhance open research infrastructure by providing end-to-end service for researchers. Libraries already support institutional repositories and provide access to millions of titles; meanwhile librarians offer support for collecting, analyzing, and visualizing scholarly data sets. By publishing journals and other materials, libraries participate in another key part of the research lifecycle. Additionally, most library publishers follow the platinum open access publishing model (i.e., free of costs to both authors and readers), which can help amplify underrepresented voices in scholarly publishing.

Despite the benefits libraries can offer to researchers and societies by publishing their scholarship, challenges abound with building out the infrastructure for executing publishing workflows. The Library Publishing Coalition seeks to fill this gap by helping libraries develop, maintain, and improve publishing services, workflows, and infrastructure. At this point, library publishing is a work in progress, but the foundation has been laid, and libraries have the potential to be a formidable force in the open research movement by taking on the publishing of open access journals.

From creation to consumable knowledge: supporting research workflows in an open infrastructure
It goes without saying that the library fulfills a key role in delivering services to users in support of the acquisition and dissemination of knowledge. Certainly today, as humanity seeks to address some of its most pressing scientific challenges, open and reliable access to information takes center stage. Where and how, then, can the role of the library evolve to support research and speed the time from its creation to consumable, living knowledge?

This discussion will focus on ways in which libraries and vendors alike can support research in an open infrastructure. The presenter will look at researchers’ needs to conduct and share their work, while considering how the library – on its part – can best collect, preserve, disseminate and manage the research. Specific attention will be paid to open collaboration platforms in support of open science. The presenter will also discuss how open source solutions may best support evolving needs for innovation in library workflows and the delivery of new services to users in support of research, teaching and learning.
NISO Plus 2021

Lightning Talks

01:06:06
In this session, presenters will have five minutes to share an elevator pitch for their product or project. Afterwards, the audience will have the opportunity both to ask questions to better understand the projects and to suggest tools that they would like to see in the information ecosystem.
Linked data is central to the semantic web and, therefore, to the future of information sharing. Our two expert speakers will share their views on the current state of play with linked data in the information community, and what they predict the future will bring.
NISO Plus 2021

Metadata and discovery

55:37
What You Can Do to Help Promote Transparency in Discovery -- and Why
NISO recently updated the Open Discovery Initiative Recommended Practice (https://www.niso.org/publications/rp-19-2020-odi), which outlines best practices for working with library discovery services. It defines ways for libraries to assess the level of content provider participation; streamlines the process by which libraries, content providers and discovery service providers work together; defines models for “fair” linking; and suggests usage statistics that should be collected for libraries and for content providers. The recommendations in this document, created by members of the Open Discovery Initiative Standing Committee, enable libraries, discovery service providers, and content providers to work together to the full extent of their abilities—providing the most effective and rich experience to end users.

In this presentation, you will learn about the Open Discovery Initiative, what changes were included in the 2020 revision of the ODI Recommended Practice, and delve more deeply into several areas: free-to-read content, fair linking, and the key elements included in the newly added library conformance statements.

Better metadata makes a difference
Libraries create, ingest and use metadata for a variety of purposes and activities, including supporting end user discovery of resources and collections. In order to successfully facilitate resource discovery, librarians must ensure that the metadata in their systems and discovery layers is standardised, accurate and as complete as possible; otherwise, their collections can be rendered essentially invisible to the user.
 
In order to improve metadata visibility and quality, librarians need initial and continuing technical training. Dr Diane Pennington will discuss how she provides training in metadata, cataloguing, and library systems in the MSc Information & Library Studies course at the University of Strathclyde’s iSchool. She will also provide an overview of her students’ broad range of applied and theoretical metadata research in order to illustrate the need for critically-informed, evidence-based metadata practice and implementation.
 
You will then hear from Emma Booth about the National Acquisitions Group Quality of Shelf-Ready Metadata Project, which collected data from UK academic libraries about their experiences with vendor-produced metadata for books and e-books. This case study serves to illustrate how poor quality metadata has a genuinely negative impact upon libraries and their users. It also demonstrates that the development and adoption of standards related to metadata quality is in the interests of everyone involved in the supply and use of library content because all stakeholders in the supply chain stand to benefit from ‘better’ richer metadata that can effectively bridge the gaps between information and communities.


The Public Library Data Alliance (PLDA) is the implementing organization of the Measures that Matter project (https://measuresthatmatter.net/) and seeks to operationalize the goals of that project, primarily to "collaboratively develop and implement a National Action Plan that will allow libraries to more effectively turn data into useable information to demonstrate the value of library collections and services nation-wide." Come join some of the founding members of the PLDA in a discussion of the project and to help determine the areas of needed focus in this area.
Miles Conrad was the founder of the National Federation of Abstracting and Indexing Services (NFAIS), and this award was established in 1965, in his memory. During the 1960s, Conrad encouraged NFAIS members — scholarly societies and government agencies — to work collaboratively in support of the space exploration program, in order to enhance the speed with which scientific knowledge could be disseminated, discovered, and acted upon. In the years that followed, NFAIS expanded its cross-disciplinary membership and played an important role in the development of online information services and resources, before merging with the National Information Standards Organization (NISO) in 2019. NISO’s vision, of a world where all can benefit from the unfettered exchange of information, reflects the aims of both organizations; in awarding this prize, we are proud to continue recognizing the contributions of those whose lifetime achievements have moved our community forward.
Can open access play a role to fight fake news?
The subject of fake news is very topical. With social networks and the advances of artificial intelligence, fake news is created and circulates faster and faster. Health and science are particularly fertile ground for fake news, and the current context of the COVID-19 pandemic has made this crisis in scientific information even more obvious.
Building on results presented at the Open Science Conference 2020, we built a prototype that automatically analyzes open access research articles to help verify scientific claims.
This prototype takes a claim such as "Does coffee cause cancer?" as input and builds three indicators to evaluate the truth behind the claim. The first indicator assesses whether the claim has been extensively studied. The second is based on an NLP pipeline and analyzes whether the articles generally agree or disagree with the claim. The third is based on the retrieval and analysis of numerical values from the pertinent articles.
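A minimal sketch of the three-indicator idea follows (this is not the authors' prototype: the stance labels stand in for the NLP pipeline, the numerical-value indicator is omitted, and the data is invented).

```python
# Toy claim-verification indicators over pre-labelled articles.
articles = [
    {"title": "Coffee intake and cancer risk: a cohort study", "stance": "disagree"},
    {"title": "No association found between coffee and cancer", "stance": "disagree"},
    {"title": "Hot-beverage consumption and oesophageal cancer", "stance": "agree"},
]

def indicators(claim, articles):
    n = len(articles)                                    # 1) how extensively studied?
    agreeing = sum(a["stance"] == "agree" for a in articles)
    share = agreeing / n if n else None                  # 2) do the articles agree?
    return {"claim": claim, "n_articles": n, "share_agreeing": share}

print(indicators("coffee causes cancer", articles))
```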
In this session, we would like to present our methodology, discuss the results we have obtained, and extend the discussion to the role that Open Access scholarly literature can play in fighting false scientific claims and helping to inform the public.

Connecting the dots: A cross-industry discussion on retracted research
Issues around the capturing, acknowledgement, classification, and tracking of retracted research are shared by academic institutions, publishing organizations, and the technology providers who support them. This cross-industry panel, moderated by a researcher and comprised of representatives from a non-profit publisher, an academic library, and a publishing platform provider, will examine shared obstacles and opportunities in processing, documenting, and communicating retractions, and will provide practical strategies for cross-industry collaboration. The panel will be moderated by Jodi Schneider, Assistant Professor, School of Information Sciences, University of Illinois at Urbana-Champaign. Jodi and other members of her research team have been spending significant time in 2020 bringing together representatives from all areas of the scholarly communication ecosystem as part of a Sloan-funded agenda-setting project. This moderated conversation will be one deliverable from a series of multiple workshops, interviews, and white papers.

Supporting sound and open science standards at the preprint stage
Preprint deposition and consumption have experienced exponential growth over the past year, particularly during the COVID-19 pandemic. Problems with clarity, transparency, and reproducibility are pervasive in preprints and published articles alike. Given that preprints are often the earliest public-facing outputs of research, preprint platforms are in an ideal position to support, incentivize, and guide authors in the adoption of established standards for improving clarity, openness, and rigor in research reporting. This session discusses the misunderstanding and misrepresentation of some preprints during the pandemic and the unique role that preprint platforms can play in curbing disinformation and cultivating best practices at this critical point in the manuscript development process.
NISO Plus 2021

NISO Awards 2021

33:48
The Ann Marie Cunningham Service award was established in 1994 to honor NFAIS members who routinely went above and beyond the normal call of duty to serve the organization. It is named after Ann Marie Cunningham who, while working with abstracting and information services such as Biological Abstracts and the Institute for Scientific Information, worked tirelessly as a dedicated NFAIS volunteer. She ultimately served as the NFAIS Executive Director from 1991 to 1994 when she died unexpectedly. NISO is pleased to continue to present this award to honor NISO volunteers who have shown the same sort of commitment to serving our organization. Starting in 1983, NFAIS honored individuals who made significant contributions to NFAIS, and subsequently retired from the information services field, by granting them a lifetime membership of the organization. NISO has also occasionally recognized individuals who have made significant contributions to our organization by honoring them in this way, and will continue to do so. Join us to find out who is honored with these awards!
NISO Plus 2021

NISO Plus closing

16:52
Closing thoughts and goodbyes from NISO Plus 2021
NISO Plus 2021

NISO Update

01:18:01
NISO projects are numerous, diverse in output, coverage, and participation, and ACTIVE!
NISO Plus 2021

Open access and analytics

34:19
Next generation OA analytics: A case study 
A critical component in the development of sustainable funding models for OA is the ability to communicate impact in ways that are meaningful to a diverse range of internal and external stakeholders, including institutional partners, funders, and authors. While traditional paywall publishers can take advantage of industry standard COUNTER reports to communicate usage to subscribing libraries, no similar standard exists for OA content. Instead, many organizations are stuck with proxy metrics like sessions and page views that struggle to discriminate between robotic access and genuine engagement.

This session presents the results of an innovative project that builds on existing COUNTER metrics to develop more flexible reporting. Reporting goals include surfacing third-party engagement with OA content, the use of graphical report formats to improve accessibility, the ability to assemble custom data dashboards, and configurations that support the variant needs of diverse stakeholders. We’ll be sharing our understanding of who the stakeholders are, their differing needs for analytics, feedback on the reports shared, lessons learned, and areas for future research in this evolving area.
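For illustration, the sketch below (invented events, simplified from the COUNTER Release 5 rules) shows the kind of processing that separates genuine engagement from noise: collapsing repeat requests for the same item by the same user within a 30-second window into a single count.

```python
# COUNTER-style "double-click" filtering, much simplified.
from datetime import datetime, timedelta

events = [  # (user_id, item_id, timestamp) -- all invented
    ("u1", "doi:10.5555/a", datetime(2021, 2, 22, 9, 0, 0)),
    ("u1", "doi:10.5555/a", datetime(2021, 2, 22, 9, 0, 12)),  # double-click
    ("u1", "doi:10.5555/a", datetime(2021, 2, 22, 9, 5, 0)),
    ("u2", "doi:10.5555/a", datetime(2021, 2, 22, 9, 0, 5)),
]

def counter_requests(events, window=timedelta(seconds=30)):
    last_seen, total = {}, 0
    for user, item, ts in sorted(events, key=lambda e: e[2]):
        key = (user, item)
        if key not in last_seen or ts - last_seen[key] > window:
            total += 1          # count only clicks outside the window
        last_seen[key] = ts
    return total

print(counter_requests(events))  # -> 3
```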

OA Book Metadata Standards to Support Usage Data Analytics 
This session will explore how current or new standards could address the challenges facing analytics that rely on the OA book usage data supply chain. Laura Ricci of Clarke & Esposito, co-author of the OA Book Supply Chain report produced with support from The Andrew W. Mellon Foundation for the global Exploring Open Access eBook Usage (OAeBU) data trust pilot project, will review the gaps and opportunities presented by the diversity of OA book stakeholders, open-access specific metadata elements and metadata standards. Lorraine Estelle, Director of Project COUNTER, and Brian O’Leary, Executive Director of the Book Industry Study Group, will then discuss how COUNTER and ONIX are positioned to address such issues in current or future releases. The session will conclude with all panelists reflecting upon where additional standards development may be needed.
Opening the ILS/LSP: Steps Towards a Fully Customizable Infrastructure
Library systems and services are at a point where they can be refined to meet the unique goals and needs of specific institutions. In spite of these impressive capabilities, library systems sometimes lack the flexibility afforded by full interoperability across multiple libraries, vendors, and platforms necessary to ensure peak performance.

The traditional centerpiece of our systems environment, the ILS or LSP, is at a crossroads between allowing the kinds of systems interplay libraries need and the barriers created by contractual issues, technical barriers, and closed infrastructure. This session will highlight specific integration and interoperability concerns with commentary from members of the university, consortia, and vendor communities. The session will also emphasize the benefits of open systems for libraries and vendors, and how NISO could play a role by considering applicable standards through a dedicated working group.

Expediting Access with a Browser Add-on: Open Source vs. Commercial Approach

Providing quick and easy access to the library’s paid resources for researchers has been an ongoing challenge for libraries. One attractive means to achieve this is a web browser add-on, because it has the advantage of being available exactly when and where scholars and researchers spot and try to obtain the full-text content of research materials while online.

LibX, a free and open-source browser add-on developed at Virginia Tech in 2005, was widely adopted and used by many libraries for more than a decade. But recently it has become defunct due to the lack of development efforts and general support from the wider library community. Now, some libraries have started licensing and implementing commercial products instead. Even though these commercial add-ons and LibX all aim to facilitate and expedite access, there are some distinct differences in their approaches.

In this session, I will (a) explore those differences in the open-source vs. the commercial approach, drawing examples from LibX and Lean Library, and (b) discuss what may be the ideal user interface design and feature set for a browser extension that meets users’ research needs, delivers a great user experience, and advances the library’s goals at the same time.
Big Tech likes to boast about how good it is at manipulating us, and oh, it is! But the covert manipulation - the psychological tricks it sells to advertisers and politicians - is thinly supported by the evidence and relies on self-serving, internal research that is largely indistinguishable from marketing puffery. On the other hand, there are plenty of ways that Big Tech provably alters our behavior: Facebook locks all your friends in its walled garden so you need a Facebook account to talk to your friends. Apple locks apps in its walled garden so you can't access apps that Apple doesn't like. Google pays billions to make it the default search on every platform, so any time you ask a question, they're the ones giving you an answer. All of this manipulation doesn't require psychological or technological tricks - all it needs is monopoly, and for the first time in 40 years, lawmakers are getting serious about fighting monopolies. Using anti-monopoly laws to break Big Tech's power may sound like a win: but if it turns out that Big Tech's claims to psychological manipulation mastery are true, then won't breaking Big Tech up just create dozens of little, reckless firms that have access to these devastating psychological weapons? In other words: if Big Tech is a comet headed at our planet threatening all life, then won't breaking it up turn it into a devastating meteor shower that we can't hope to survive?
The digitalization of research has resulted in the development of many new types of media. How can we ensure that they're adequately preserved for future generations? What new tools and services will be needed? Who should be responsible? Two digital preservation experts will share their views on this important topic.
NISO Plus 2021

Privacy: global perspectives

41:11
Thinking with GDPR (Andrew Cormack)

Europe's General Data Protection Regulation (GDPR) is sometimes portrayed as a complicated obstruction to doing what we want. This talk will look at the law behind the slogans: finding a rich source of guidance on how to develop the effective, privacy-respecting services that our customers and users - not just in Europe - need and expect. We'll look at the principles of Accountability, Necessity, Purpose Limitation and Information, and show how these help us design services that work better for users and providers. Specific examples will be taken from access management and data analytics.
mHealth Wearables and Apps: A changing privacy landscape (Christine Suver)
The use of wearables and smartphone apps to collect health-related data (mHealth) is a growing field. Wearables and health apps can continuously monitor our physical activity, sleep, heart rate, glucose levels, etc. They provide a rich data set that can supplement the data from occasional doctor's visits. But what are the privacy considerations of mHealth? We will explore global privacy principles, discuss the tension between anonymity and data utility, and propose ways to improve privacy notices and policies.


A look at China’s draft Personal Information Protection Law (Judy Bai)

With measures to ensure privacy getting prioritized worldwide, many countries have framed relevant laws and regulations on personal information protection. On October 21, 2020, China released its draft Personal Information Protection Law (PIPL) for public consultation.
When the draft PIPL gets passed, it’ll be China’s central and universal governing law on protecting personal information. While no definitive timeline has been set for the final law, we discuss some of the key features of this important piece of draft legislation and how businesses (based in China and those engaged in commercial interactions with people living in China) should prepare ahead to ensure data privacy compliance.
Publishing is the act of making data and information resources accessible to others. For effective and accurate scholarly communication, developing shared vocabularies, terminologies and semantic assets is critical to the discovery and understanding of published resources, helping to reduce ambiguity and increase interoperability. Many disciplines and organisations have local lists of vocabularies that serve multiple functions, including enhancing discovery, annotation and description. With the globalisation of data and information resources, enabling a common understanding of any ‘concept’ used to describe or define a ‘thing’ in our world is becoming critical. There is a growing need to develop common vocabularies that can be used across broader communities and support harmonization of information both within and across disciplines and languages. But we also need to be able to communicate to users of any semantic asset information about its sustainability, governance, authority, conditions of use, etc.
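As one concrete (and hypothetical) illustration, SKOS already lets a community publish a shared concept as a semantic asset, with one identifier carrying labels and definitions in several languages. The namespace URL below is a placeholder.

```python
# Publish a shared concept with multilingual labels using rdflib and SKOS.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://vocab.example.org/")  # placeholder namespace
g = Graph()
g.add((EX.preprint, RDF.type, SKOS.Concept))
g.add((EX.preprint, SKOS.prefLabel, Literal("preprint", lang="en")))
g.add((EX.preprint, SKOS.prefLabel, Literal("préimpression", lang="fr")))
g.add((EX.preprint, SKOS.definition,
       Literal("A manuscript shared before peer review.", lang="en")))

print(g.serialize(format="turtle"))
```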

Preprints have been growing in popularity and visibility across many disciplines and communities — all the more so during the COVID-19 pandemic, with rapid publication of early research on everything from vaccine development to economic impacts. While preprints have been widely adopted in some disciplines, there are still concerns about their quality and reliability, especially when they can be readily accessed by policy-makers and the public who may not yet fully understand their limitations. This session brings together three experts — from Africa, Latin America, and the US — to discuss the challenges and opportunities of preprints for researchers and non-researchers alike.
Research protocols and information standards have much in common, and both are essential components of research infrastructure. Protocols outline the process for a specific experiment or research project, while standards provide technical best practices for processes across the whole information ecosystem.  In this session, representatives from a publisher, a protocols platform, and an infrastructure organization will discuss their perspectives on the relationships between each — what's working, what isn't, and what more is needed?
Standardising and aligning journal and funder data policies 
An increasing number of publishers and journals are implementing policies that require or recommend that published articles be accompanied by the underlying research data. These policies are an important part of the shift toward reproducible research and contribute to the availability of research data for reuse. However, there is wide variation between policies, which makes it challenging for journal editors to develop and support a data policy, difficult for researchers to understand and comply, and complex for infrastructure providers and research support staff to assist with compliance.

There is clear benefit in a more standardised approach to policies. This has been the goal of international efforts led by the Research Data Alliance (RDA) Data Policy Standardisation and Implementation Interest Group resulting in the publication - and subsequent adoption - of a research data policy Framework to help journal editors and publishers navigate the creation or enhancement of a research data policy. There are also significant gains to be made in aligning journal and funder data policies with a project underway to address this challenge. This presentation will be given by co-chairs of the RDA Policy Standardisation group.

The Mystery of the Data and Software Citations...Why They Don’t Link to our Papers and Credit their Authors. 
Scientific data and software are increasingly recognized as first-class research products, preserved in appropriate repositories, and cited in the papers where they were used, to provide transparency, support reproducibility, and give credit. Yet the mechanisms that give automated credit and attribution are not being initiated consistently, nor is the ability to link these research products in a machine-readable way. Important elements for this to happen include the citation itself and the persistent identifier (e.g., Digital Object Identifier) that is registered to the research object. This session will explain the current processes and examine the issues, along with recommendations being proposed to help researchers get automated credit and attribution and to support linking across research objects.


Join us for an update on the Seamless Access project (http://seamlessaccess.org) and a discussion of the issues surrounding the world of Federated Authentication.
NISO Plus 2021

Solving problems with standards

49:26
NISO Recommended Practices for Video and Audio Metadata
Although many metadata standards address video and audio assets to some extent, a clear, commonly understood, and widely used set of properties is lacking. This is particularly problematic when assets are interchanged between their producers, such as educators, researchers, and documentarians, and their recipients, such as aggregators, libraries, and archives.

The NISO Video and Audio Metadata Working Group (VAMD) was formed to address this problem. Composed of technologists, librarians, aggregators, and publishers, the working group collaborated to develop a set of metadata properties deemed generally useful for the interchange of media assets. This includes bibliographic properties used for identification and citation, semantic properties useful for search and discovery, technical properties specific to media assets, and administrative properties to facilitate transactions.

This model is not intended to employ or replace existing metadata standards and vocabularies. Instead, the VAMD terms are a set of recommended properties to be expressed in the appropriate metadata scheme for specific parties, serving as a hub to facilitate interchange between parties that use different metadata schemes.

This session will present the current state of media asset interchange, the use cases addressed, and the results of a comparison with nine existing related standards, such as MARC and PBCore.
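Purely for illustration, a media-asset record organized along the four groupings described above might look like the following. The property names here are invented for the sketch, not the published VAMD terms.

```python
# Hypothetical media-asset record grouped the way the abstract describes;
# every property name below is invented for illustration.
media_asset = {
    "bibliographic": {"title": "Oral history interview, 1968",
                      "identifier": "doi:10.5555/xyz"},
    "semantic": {"subjects": ["labor history", "mining"], "language": "en"},
    "technical": {"duration_seconds": 3146, "format": "video/mp4",
                  "frame_size": "1920x1080"},
    "administrative": {"rights": "CC BY-NC 4.0", "supplier": "Example Archive"},
}
```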

Introducing the Software Citation: Giving Credit Where Credit is Due
Research is commonly intense and complicated. The work to analyze a hypothesis involves building on the discovery of others and contributing new ideas and approaches. Sometimes researchers use tools designed for their community that are licensed or open source, and sometimes they must develop their own software or workflow in order to achieve their objectives. This software (aka code, model) is an important research object that supports transparency and reproducibility of our research. Without the software, it can be much harder or impossible to fully understand how the resulting data were generated and to have faith in the conclusions presented in the paper.

In this session we will share (via slides) 1) the guidance developed by the FORCE11 Software Citation Implementation Working Group for authors, developers, and journals; 2) how it supports and aligns with efforts happening around JATS/JATS4R; and 3) ways for the community to evaluate how well software citations and the necessary availability statements are being provided by authors.


Subsetting the JATS DTD – So What?
As scholarly publishers transition from manual, PDF-based workflows to automated, XML-based workflows, they will find important advantages in subsetting the JATS (Journal Article Tag Suite) DTD.

JATS was designed as a descriptive, not a prescriptive, DTD, so it allows for different ways to capture the same content and information. While this was necessary to accommodate widely divergent journal styles and legacy content, the looseness of the DTD poses problems for people building tools to bring XML forward in more automated publishing workflows. For example, building an online XML editor that allows all 11 ways of associating authors and affiliations would be unnecessarily complex and expensive to develop and maintain.

Fortunately, the JATS DTD was also designed to be easily subsetted. Content analysts can narrow the variations that developers are required to build to, making automated systems cheaper to develop and more robust. A well-designed subset that considers industry initiatives such as JATS4R also aids in making XML content more machine-readable and thus more discoverable.
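As a hypothetical example of what a subset rule looks like in practice, suppose a subset requires every author to reference an affiliation via an xref (one of the many patterns full JATS allows). The element names below are real JATS; the rule, and the check enforcing it, are the subset designer's choice.

```python
# Enforce one (hypothetical) subset rule: each author <contrib> must
# reference an <aff> via <xref ref-type="aff">.
import xml.etree.ElementTree as ET

snippet = """<contrib-group>
  <contrib contrib-type="author">
    <name><surname>Ahmed</surname><given-names>S.</given-names></name>
    <xref ref-type="aff" rid="aff1"/>
  </contrib>
  <aff id="aff1">University of Example</aff>
</contrib-group>"""

root = ET.fromstring(snippet)
aff_ids = {aff.get("id") for aff in root.iter("aff")}
for contrib in root.iter("contrib"):
    refs = [x.get("rid") for x in contrib.findall("xref") if x.get("ref-type") == "aff"]
    assert refs and set(refs) <= aff_ids, "author lacks a valid affiliation xref"
print("conforms to subset rule")
```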
Standards are an essential component of open research infrastructure, enabling interoperability, increasing efficiencies, reducing errors, and improving user experience. In order to be truly effective, standards must be developed and implemented globally, but how? This panel of experts from Australia, Ireland, and Mexico will discuss the challenges for their communities, and identify opportunities for the information community to work together globally to address them.
We're all familiar with the problems that occur when standards fail to take into account diversity — from seatbelts that don't protect women as well as they protect men, to buildings that are only accessible to those with full mobility. Information standards, likewise, fail if they aren't created by, for, and with the community that uses them. This session will look at some existing standards that support DEI, and the speakers will also discuss what more we need to do to ensure that future standards are as diverse, inclusive, and equitable as they must be to succeed.
How does the infrastructure that supports our community get funded, and by whom? How can we ensure its long-term sustainability — and, by extension, that of the research tools and services that depend on it? What does sustainability even mean? Our speakers share their perspectives on these and other important questions.
In the Eye of the Beholder: What’s a Digital Preservation System Anyway? 
Cultural heritage organizations increasingly depend on digital platforms to support the curation, discovery, and long-term management of digital content. Yet, some of these systems and tools have been shown to have substantial sustainability challenges. The long-term stewardship of digital cultural materials depends not only on the technical resiliency of preservation systems, but on their financial and organizational sustainability. Funded by the Institute of Library and Museum Services (IMLS), Ithaka S+R is assessing how digital preservation systems are developed, deployed, and sustained through a series of case studies. We will share our initial findings related to design approaches of community-based and commercial digital preservation and curation initiatives, offer lessons learned, and propose alternative sustainability models for long-term maintenance and development. Although digital preservation is a well-established concept, it continues to be a situated and interpretive process, highly variable across different institutional settings. Rather than trying to adjudicate what does and does not “count” as digital preservation, we are studying the systems and services that cultural heritage organizations might use toward meeting digital preservation goals. In taking this broad approach, we hope to acknowledge the diversity of curatorial practices, priorities, and resource capacities that cultural heritage organizations bring to digital preservation work.

Addressing the pain in preservation
Almost everyone involved with digital information agrees that digital preservation is a “good thing” and should be part of business as usual. However, implementing widespread preservation is often a very painful process… and many initiatives are stillborn. The pain points are many and varied (no resources; lack of trust; lack of understanding; poor interoperability; no connectivity, to name but a few), but they’re also not new. These 20th century problems are all solvable - even more so now that we have access to 21st century technology.

And we intend to do just that… Well, to be more accurate, we intend to use the hive mind of NISO attendees to map out pathways to solutions. Outlining the problems and then asking the fun questions: What could take away the pain? How? What needs to be in place? What’s stopping us from doing it right now?

This session will start with a few short provocations to give a flavour of the problems and possible approaches to solving them. Then the discussion begins. Nothing is off limits. If you have a problem that needs a solution or a solution that’s looking for a problem to solve this is the session for you.
What do AI and machine learning mean for the future of intellectual property? Hear views from two expert lawyers — representing a library and a publishing perspective — and then join the discussion afterwards to share your own views.
In Pursuit of DEI in a Complex Landscape
Diversity, Equity and Inclusion (DEI) has become an important topic across our community. In this session we will discuss how this affects us as organizations, individually and as a community. We will start with short presentations where each panel member discusses the approach of their organization to implementing and promoting DEI. We finish off with a hopefully lively discussion with the audience around concerns, issues and the responsibility that each of us has. We also hope to identify gaps and opportunities for collaborative approaches.

Central to the discussion is the understanding that, while each library has its own approach to DEI, Ex Libris takes a holistic approach to its DEI efforts by focusing on commitments within three areas: employment practices, community relations, and its services and products. A key part of the approach is the recognition that employees and industry partners are central to ensuring the organization designs, builds and maintains products that serve everyone equally. The Ex Libris presentation will focus on collaboration with the community as a key factor in the development of products that are created for all.

More details to follow
The CRediT (Contributor Roles) taxonomy — already in use by a number of publishers and other organizations — is currently being formalized as an ANSI/NISO standard. It is valued by the community as a way of recognizing more of the many types of research contribution. But there are also still many challenges to be addressed, including the current focus on roles in the STEM publication process, which will be tackled in future phases. The speakers in this session will share their views on the current and future value of CRediT, how that can be maximized in future, and what challenges will need to be overcome for us to be successful.
The impact of the COVID-19 pandemic and the Black Lives Matter protests in 2020 was (and continues to be) felt all around the world — including the information world: the rapid shift to online learning; the dramatic increase in the use of digital resources; the challenges of working from home; budget and hiring cuts and freezes. Which of these changes — and more — will be permanent? What will the information community look like five or ten years from now? The lessons we learn will help us better understand the fragility or resilience of our organizations and structures, our processes and policies. This session includes perspectives from librarians, publishers, and vendors from around the world about what their experiences in 2020 have taught them.