NISO Virtual Conference

9 Matching Videos
It’s a muddled area for libraries, content providers, and readers. Long-form content has traditionally been contained in printed volumes, both for ease of consumption and for convenience of access. With the arrival of ebooks, some aspects of engaging with long-form content became a bit easier – searchability, mobility, etc. Still, neither form seems to fully satisfy. Each user learns his or her own best practices for reading and referencing book content. Is it any wonder, then, that those whose scholarship relies on long-form content are suspicious of proposed changes to book production, delivery, and access? This virtual conference will consider, from a variety of perspectives, issues associated with the creation, publication, and distribution of The Book. Speakers may explore metrics of usage (downloads, duration of reading session, etc.) as well as questions of reader behavior, assignment of metadata, and long-term access to licensed digital content.
One ongoing concern in scholarly communication has to do with publication time lags and, ultimately, any delays to research dissemination. How can publishing systems more efficiently support peer review? How rapidly can a manuscript move from completed draft to preprint to a final version of record? Certainly, in recent years there have been calls for more efficient and more transparent manuscript transfer and exchange. However, ensuring quality of publication has always entailed a certain degree of lag as materials move through the editorial and production process. This event will examine some of the nuances of that process as well as emerging possibilities for improvement. A natural follow-up question is how best to guard against predatory publishers – those who would seduce researchers into submitting good work to questionable periodicals. No author wants to pay hefty fees for publication lacking the checks of peer review or editorial oversight. Are whitelists (or conversely, blacklists) the right approach in guiding researchers to the best journals for their scholarly output? What about badges for publications (whether in traditional formats or not)? Or will such protective approaches simply expand the existing issues associated with metrics used to gauge impact and/or reach?
The primary selling point of metrics for the academic researcher was the promise that demonstrating the value of one’s work would secure the increased, long-term funding needed to carry that work forward. Prestige, tenure, influence, even celebrity – these have been stepping stones to securing significant (and much-needed) grants to educational institutions of all sizes and types. But have these incentives been subverted over time or in specific ways? Is the drive to publish-or-perish the best mechanism for encouraging substantive study? The integrity of the publishing process, and perhaps the integrity of the funding model for higher education itself, is at stake. This session will look at some of the troubling questions surrounding the incentives offered to the working scholar, researcher, and scientist. Presenters in this virtual conference will consider the following questions:
· How might institutions and research facilities best weld available indicators of use or influence into a meaningful metric?
· If individual scholarship is best gauged by the value assigned to it by the larger community, then what collection of metrics should be gathered for purposes of determining appropriate rewards in the context of academia?
· How might institutions better address this challenge and reward faculty appropriately?
Projects built on top of multiple open data sets are becoming more visible to the public. This virtual conference will serve as an expansive tour of a variety of open data projects from academia, local government, and other sectors. Looking for inspiration, useful examples, or just the opportunity to learn what’s possible? This virtual event will spotlight novel approaches as well as practical activities.
Current thinking is that scientific research should be readily reproducible, discoverable, and openly accessible. There is also a significant drive to develop open educational resources in the interest of easing economic burdens on student populations. The challenge for libraries, content providers, and platform providers is how best to implement strategies, technologies, and practices in support of those concerns. But there are questions that must be addressed in discussing open science, open educational resources, open access monographs, etc. What forms of support are necessary to bring this open approach into reality? What may be feasible in building an inclusive and collaborative knowledge infrastructure in this environment? What are the key elements or best practices? What fiscal models or arrangements might be needed to ensure sustainability? Which sector (academic, government/public, commercial, etc.) is best positioned to muster the necessary resources?
Recent years have brought a variety of new technologies into the mainstream, such as artificial intelligence, data science, and virtual and augmented reality. As the research community increasingly uses these tools and techniques to generate findings, what does the library need in order to support both the research activity and the resulting output? This virtual conference will explore technologies supported by the modern research library and their impact on both workflow and workforce. The first block of the day will consist of discussions of the administrative view of new technologies affecting the library, with the rest of the day given over to case studies.
Supercomputing is used by scientists and engineers working on complex research problems. Such investigations may involve data-intensive applications that consume enormous amounts of bandwidth and computing power. Instruction on campus is increasingly tied to learning management systems, which require seamless integration with the information resources found in libraries. At the same time, libraries are expressing numerous concerns associated with digital asset and access management. Where is the institutional IT department in all of this, and just how far can its resources be stretched? This virtual event will look at the systems demands found on campus and offer examples of how innovative research institutions (and most particularly, their libraries) are meeting the challenges of talent-sourcing, integration, and support. The first block of speakers addresses the topic at a high level; the second block addresses specific types of computing and IT services that the library is asked to support in some fashion; and the third discusses policy concerns such as privacy and the ethical use of collected data.
Rob Sanderson of the J. Paul Getty Trust tweeted in 2018 that “The interface /is/ the application, regardless of the technology. Building better interfaces is building a better world.” What are the implications of that for both the library and vendor communities? Data sets, open educational resources, and video and audio files are part and parcel of academic activity. Such output may be properly housed on institutional servers, but is the associated metadata sufficient to enable reuse by others over the long term? What might libraries need to do to better support discovery and reuse of research output that has not been (or may never be) fully integrated with more traditional publication formats? What elements (descriptive or otherwise) might need to be included in order for users to understand the potential reuse of the material? At the same time, is it reasonable to expect a single interface to satisfy the diverse needs of the domain expert, the interdisciplinary scholar, and the undergraduate just beginning to explore? How complex can a useful interface be? Is it possible to reverse devotion to the single search box? It’s time to talk about the design and use of a service’s native interface!
According to Wikipedia, a preprint is a “version of a scholarly or scientific paper that precedes publication in a peer-reviewed scholarly or scientific journal”. Preprint archives, such as arXiv and SSRN, rapidly achieved prominence in both the hard and social sciences as rapid access to new work became a priority. It’s wonderful to have those platforms, but what are the best practices for libraries and other content providers in working with them? Should preprints be assigned DOIs? What relationship should exist between preprints and discovery services? How well do they interoperate with link resolvers? What are the implications for citation practices?