Hence, distinguishing necessary from non-necessary diagrams will be a task for future knowledge hardness measurements, one that requires a deep understanding of the knowledge content. Similarly, when counting formulas, intermediate equations derived along the way should be excluded; only the concluding formulas should be counted as essential.
Moreover, disciplines with many formulas and diagrams tend to produce more reference manuals for consultation. An extended hypothesis therefore follows: the knowledge hardness of a discipline is directly proportional to the number of reference manuals it requires and to their usage rate. If a discipline lacks a substantial body of core knowledge, compiling a reference manual for it is difficult. Directly using the number of reference manuals in a discipline as an estimate of its knowledge hardness is therefore another measurement option.
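As a rough illustration of this extended hypothesis, the sketch below scores a discipline's hardness as proportional to the product of its reference-manual count and their usage rate. The discipline names, counts, rates, and the normalization step are hypothetical assumptions introduced only to show the proportionality; they are not figures or a formula taken from this study.

    # Minimal sketch of the reference-manual proxy described above.
    # All discipline names, counts, and rates are hypothetical; the formula is an
    # illustrative assumption, not the measurement used in this study.

    def manual_based_hardness(manual_count: int, usage_rate: float) -> float:
        # Hardness proxy proportional to manual count times usage rate.
        return manual_count * usage_rate

    # Hypothetical inputs for two disciplines.
    disciplines = {
        "Physics": (40, 0.9),
        "Library and Information Science": (5, 0.3),
    }

    scores = {d: manual_based_hardness(m, u) for d, (m, u) in disciplines.items()}
    top = max(scores.values())
    for d, s in scores.items():
        # Normalize to [0, 1] so the proxies are comparable across disciplines.
        print(f"{d}: relative hardness proxy = {s / top:.2f}")

Under such a proxy, a discipline with few manuals and low consultation rates scores low regardless of how the values are normalized, which is the intuition behind the hypothesis.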
To elevate the "knowledge hardness" of Library and Information Science, we suggest the following approaches: (1) systematically consolidating core knowledge and refining pivotal knowledge points; (2) condensing these pivotal knowledge points into essential formulas and requisite diagrams; (3) constructing a knowledge system around these fundamental formulas and necessary diagrams. This is the pressing challenge facing Library and Information Science. Of course, formulas, images, and tables with weak relevance to core disciplinary knowledge should be avoided. For instance, the textbook Foundations of Information Science includes various statistical formulas and software screenshots, which resulted in a measured knowledge hardness (0.42) higher than that of Economics (0.25). If these non-essential formulas and diagrams were excluded, its knowledge hardness would decrease significantly.
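The 0.42 and 0.25 values above come from the study's own measurements, and the exact formula is not restated in this passage. Purely to illustrate how excluding non-essential items would lower a measured value, the sketch below assumes a simple density-style proxy (counted formulas and diagrams per page); the proxy itself and every count in it are illustrative assumptions, not this study's definition or the textbook's real figures.

    # Illustrative only: a density-style hardness proxy showing how excluding
    # non-essential formulas and diagrams lowers the measured value. The proxy
    # and all counts are assumptions, not this study's definition or data.

    from dataclasses import dataclass

    @dataclass
    class Item:
        kind: str        # "formula" or "diagram"
        essential: bool  # judged manually, since automatic detection is difficult

    def hardness_proxy(items, pages, essential_only):
        # Count items (optionally essential ones only) per page.
        counted = [i for i in items if i.essential or not essential_only]
        return len(counted) / pages

    # Hypothetical 500-page textbook with 150 formulas and diagrams, 60 essential.
    items = ([Item("formula", True)] * 40 + [Item("diagram", True)] * 20 +
             [Item("formula", False)] * 60 + [Item("diagram", False)] * 30)

    print(hardness_proxy(items, 500, essential_only=False))  # 0.30, all items counted
    print(hardness_proxy(items, 500, essential_only=True))   # 0.12, essential items only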
This method for measuring disciplinary knowledge hardness is introduced in order to assess hardness quantitatively and to compare its evolution across epochs and disciplines. In practice, it has unavoidable shortcomings: variations in textbook style, the intended learners of a discipline, and their requirements all affect the resulting metrics, and it is difficult, without manual intervention, to single out the essential diagrams and formulas that best represent a discipline's hardness. Nevertheless, the metrics provide a reasonably objective grasp of the overall direction of disciplinary hardness. It is essential to guard against future misinterpretation of the hardness metric, against mechanically using it to evaluate a discipline, and against deliberately inflating the number of diagrams and formulas when compiling textbooks. Such actions would deviate significantly from the original intention of providing an authentic measure of disciplinary hardness.
Compared with established disciplines, Library and Information Science faces challenges in constructing its core knowledge and knowledge systems. It frequently chases trendy topics while neglecting to build up its own knowledge hardness and depth. Is it not overly ambitious to forsake foundational methodologies such as classification, abstracting, and indexing, which are fundamental to information resource and knowledge management, in favor of chasing computer technologies such as cloud computing and artificial intelligence? Is it not presumptuous to attempt to influence the millennia-old foundations of the sciences and humanities with immature knowledge and disciplinary hardness? Is it not overreaching to assume the significant responsibility of academic evaluation