• Ubiquitous health profile (UHPr): a big data curation platform for supporting health data interoperability

      Satti, Fahad Ahmed; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Khattak, Asad Masood; Lee, Sungyoung; Kyung Hee University, Global Campus, Yongin, South Korea; Hamad Bin Khalifa University (HBKU), Education City, Doha, Qatar; University of Derby; Zayed University, Abu Dhabi, UAE (Springer, 2020-08-19)
      The lack of interoperable healthcare data presents a major challenge towards achieving ubiquitous health care. The plethora of diverse medical standards, rather than a common standard, is widening the interoperability gap. While many organizations are working towards a standardized solution, there is a need for an alternate strategy that can intelligently mediate among a variety of medical systems which do not comply with any mainstream healthcare standard, while utilizing the benefits of several standard-merging initiatives, to eventually create digital health personas. The existence and efficiency of such a platform depend upon the underlying storage and processing engine, which must acquire, manage and retrieve the relevant medical data. In this paper, we present the Ubiquitous Health Profile (UHPr), a multi-dimensional data storage solution in a semi-structured data curation engine, which provides foundational support for archiving heterogeneous medical data and achieving partial data interoperability in the healthcare domain. Additionally, we present the evaluation results of this proposed platform in terms of its timeliness, accuracy, and scalability. Our results indicate that the UHPr is able to retrieve an error-free comprehensive medical profile of a single patient from a set of slightly over 116.5 million serialized medical fragments for 390,101 patients, while maintaining a good scalability ratio between the amount of data and its retrieval speed.
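      As an illustration of the fragment-based retrieval idea described above (a minimal sketch only; the class and field names below are hypothetical and are not the UHPr API):

      ```python
      # Minimal sketch: serialized medical fragments keyed by patient ID, with the
      # patient profile reassembled on retrieval. Names here are hypothetical.
      import json
      from collections import defaultdict

      class FragmentStore:
          def __init__(self):
              self._by_patient = defaultdict(list)  # patient_id -> list of serialized fragments

          def add(self, patient_id: str, fragment: dict) -> None:
              # Fragments are kept semi-structured (serialized JSON) rather than forced into one schema.
              self._by_patient[patient_id].append(json.dumps(fragment))

          def profile(self, patient_id: str) -> list:
              # Retrieval reassembles the patient's profile from all stored fragments.
              return [json.loads(f) for f in self._by_patient[patient_id]]

      store = FragmentStore()
      store.add("p-001", {"source": "EHR-A", "observation": "blood_pressure", "value": "120/80"})
      store.add("p-001", {"source": "wearable-B", "observation": "heart_rate", "value": 72})
      print(store.profile("p-001"))
      ```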
    • An ultrasonic atomisation unit for heat and moisture exchange humidification device for intensive care medicine applications

      Mahmoud Shafik; University of Derby (WJRR, 2016-04)
      The state of the art of existing heat and moisture exchange (HME) technology shows that there are two main artificial humidification HME devices: active and passive. The active device is complicated to use and expensive. The passive HME device is the preferred one, due to its ease of use and low cost. However, it is not suitable for more than 24 hours of use, because of the current devices' cavity design, the limited performance of HME materials, and the overall device efficiency. This paper presents the outcomes of research work carried out to overcome these teething issues and presents a piezoelectric ultrasonic atomisation unit for the passive humidification device. The aim is to improve the performance of the device's heat and moisture exchange (HME) materials by recovering the accumulated moisture, for greater patient care. The atomisation device design, structure, working principles and analysis using finite element analysis (FEA) are discussed and presented in this paper. Computer simulation and modelling of the atomisation device using FEA was used to examine the device structure. It also enabled the selection of the material for the active vibration transducer ring, investigation of the material deformation, definition of the operating parameters, and establishment of the working principles of the atomisation unit. A working prototype has been fabricated to test the device, its technical parameters, performance and practicality for use in such intensive care applications. Experimental tests showed that the electrical working parameters of the device are: current 50 mA, voltage 50 V, frequency 41.7 kHz. The atomisation device has been integrated into the passive HME humidification device, and initial results show an improvement in the moisture return of the device of 2.5 mg H2O per litre. This shows the potential of the developed unit to improve HME material performance in such a working environment.
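      A quick check on the reported operating point (assuming the quoted 50 V and 50 mA are RMS drive values, which the abstract does not state): the apparent electrical input power is

      $$ S = V_{\mathrm{rms}}\, I_{\mathrm{rms}} = 50\ \mathrm{V} \times 0.05\ \mathrm{A} = 2.5\ \mathrm{VA}, $$

      with the real power dissipated in the piezoelectric load lower by the power factor of the transducer.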
    • Understanding and managing sound exposure and noise pollution at outdoor events

      Hill, Adam J.; University of Derby (Audio Engineering Society, 2020-05-22)
      This report is intended to present the current state of affairs surrounding the issue of outdoor event-related sound and noise. The two principal areas of investigation are sound exposure on-site and noise pollution off-site. These issues are different in nature and require distinct approaches to mitigate the associated negative short-term and long-term effects. The key message that is presented throughout this report is that the problems/ambiguities with current regulations are due to a lack of unbiased, scientifically-based research. It is possible to deliver acceptably high sound levels to audience members in a safe manner (minimizing risk of hearing damage) while also minimizing annoyance in local communities, where solutions to the on-site and off-site problems should begin with a well-informed sound system design. Only with a properly designed sound system can sound/noise regulations be realistically applied.
    • Unifying local-global type properties in vector optimization.

      Bagdasar, Ovidiu; Popovici, Nicolae; University of Derby (Springer, 2018-04-21)
      It is well-known that all local minimum points of a semistrictly quasiconvex real-valued function are global minimum points. Also, any local maximum point of an explicitly quasiconvex real-valued function is a global minimum point, provided that it belongs to the intrinsic core of the function’s domain. The aim of this paper is to show that these “local min - global min” and “local max - global min” type properties can be extended and unified by a single general local-global extremality principle for certain generalized convex vector-valued functions with respect to two proper subsets of the outcome space. For particular choices of these two sets, we recover and refine several local-global properties known in the literature, concerning unified vector optimization (where optimality is defined with respect to an arbitrary set, not necessarily a convex cone) and, in particular, classical vector/multicriteria optimization.
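      For reference, the scalar “local min - global min” property that the paper generalizes can be stated as follows (standard definitions, not taken from this abstract): a function $f$ defined on a convex set $D$ is semistrictly quasiconvex if

      $$ f(x) \neq f(y) \;\Longrightarrow\; f(\lambda x + (1-\lambda) y) < \max\{f(x), f(y)\} \qquad \text{for all } x, y \in D,\ \lambda \in (0,1), $$

      and for such a function every local minimum point over $D$ is a global minimum point.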
    • Unsupervised deep learning for real-time assessment of video streaming services

      Vega, Maria Torres; Mocanu, Decebal Constantin; Liotta, Antonio (Springer US, 2017)
    • Unveiling the strong interaction among hadrons at the LHC

      Barnby, Lee; ALICE Collaboration; STFC Daresbury Laboratory; University of Derby (Springer Science and Business Media LLC, 2020-12-09)
      One of the key challenges for nuclear physics today is to understand from first principles the effective interaction between hadrons with different quark content. First successes have been achieved using techniques that solve the dynamics of quarks and gluons on discrete space-time lattices. Experimentally, the dynamics of the strong interaction have been studied by scattering hadrons off each other. Such scattering experiments are difficult or impossible for unstable hadrons and so high-quality measurements exist only for hadrons containing up and down quarks. Here we demonstrate that measuring correlations in the momentum space between hadron pairs produced in ultrarelativistic proton–proton collisions at the CERN Large Hadron Collider (LHC) provides a precise method with which to obtain the missing information on the interaction dynamics between any pair of unstable hadrons. Specifically, we discuss the case of the interaction of baryons containing strange quarks (hyperons). We demonstrate how, using precision measurements of proton–omega baryon correlations, the effect of the strong interaction for this hadron–hadron pair can be studied with precision similar to, and compared with, predictions from lattice calculations. The large number of hyperons identified in proton–proton collisions at the LHC, together with accurate modelling of the small (approximately one femtometre) inter-particle distance and exact predictions for the correlation functions, enables a detailed determination of the short-range part of the nucleon-hyperon interaction.
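      As background, the standard femtoscopy relations used in such analyses (not reproduced from this abstract): the measured correlation function is the ratio of same-event to mixed-event pair distributions in the relative momentum $k^*$, and it is modelled by folding the emission source with the two-particle wave function,

      $$ C(k^*) = \mathcal{N}\,\frac{A(k^*)}{B(k^*)} \;\simeq\; \int d^3 r\, S(r)\, \left|\psi(k^*, r)\right|^2 , $$

      so that, for a known source $S(r)$ of roughly one femtometre, a measured $C(k^*)$ constrains the interaction encoded in $\psi$.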
    • Use of artificial intelligence to improve resilience and preparedness against adverse flood events

      Saravi, Sara; Kalawsky, Roy; Joannou, Demetrios; Rivas Casado, Monica; Fu, Guangtao; Meng, Fanlin; Loughborough University (MDPI AG, 2019-05-09)
      The main focus of this paper is the novel use of Artificial Intelligence (AI) in natural disasters, more specifically flooding, to improve flood resilience and preparedness. Different types of flood have varying consequences and follow specific patterns. For example, a flash flood can be a result of snow or ice melt and can occur in specific geographic places and certain seasons. The motivation behind this research arises from the Building Resilience into Risk Management (BRIM) project, looking at resilience in water systems. This research applies state-of-the-art techniques, i.e., AI and more specifically Machine Learning (ML) approaches, to big data collected from previous flood events, in order to learn from the past, extract patterns and information, and understand flood behaviours so as to improve resilience, prevent damage, and save lives. In this paper, various ML models have been developed and evaluated for classifying floods, i.e., flash flood, lakeshore flood, etc., using current information, i.e., weather forecasts, in different locations. The analytical results show that the Random Forest technique provides the highest classification accuracy, followed by the J48 decision tree and Lazy methods. The classification results can lead to better decision-making on what measures can be taken for prevention and preparedness, and thus improve flood resilience.
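      A minimal sketch of the classification step described above, using the random forest technique the paper reports as most accurate (synthetic data and hypothetical feature names; not the authors' pipeline):

      ```python
      # Classify flood type from weather-forecast-style features with a random forest.
      # Features and labels below are synthetic placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      X = rng.random((500, 4))   # hypothetical features: rainfall, temperature, snowmelt index, river level
      y = (X[:, 0] > 0.6).astype(int) + (X[:, 2] > 0.7).astype(int)   # synthetic flood-type labels: 0, 1 or 2

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
      print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
      ```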
    • Using SeaWiFS measurements to evaluate radiometric stability of pseudo-invariant calibration sites at top of atmosphere

      Li, Chi; Xue, Yong; Liu, Quanhua; Ouazzane, Karim; Zhang, Jiahua; Institute of Remote Sensing and Digital Earth; University of Maryland; London Metropolitan University (IEEE, 2014-06-30)
      The Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) data from 1997 to 2001 are adopted to monitor the radiometric stability of six pseudo-invariant calibration sites (PICSs) at the top of atmosphere (TOA). Cloud-free and homogeneous observations of the spectral TOA reflectance ρTOA at eight SeaWiFS channels over these sites are fitted to the Ross-Li bidirectional reflectance distribution function (BRDF) model, and the time series of BRDF-normalized spectral TOA reflectance RTOA is then presented and analyzed. Overall, good stability during the evaluated period is exhibited, as more than half of the derived trends are statistically insignificant, whereas the root mean square (RMS) of the BRDF modeling residuals reveals the spectral dependence of the PICSs' stability at TOA, i.e., the uncertainty of RTOA appears to be larger at shortwave visible (SV) channels (~2.5%) compared with that of red/NIR bands (~1%). In addition, the early-mission data adopted in our study show favorable reliability and are thus recommended for similar purposes.
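      A minimal sketch of the kernel-driven BRDF fitting idea: a Ross-Li type model is linear in its coefficients, so it can be fitted by least squares. The kernel values below are synthetic placeholders; in practice K_vol and K_geo would be computed from the sun-view geometry of each observation:

      ```python
      # Fit rho_TOA ≈ f_iso + f_vol * K_vol + f_geo * K_geo by linear least squares
      # and report the RMS of the modeling residuals. All values are synthetic.
      import numpy as np

      K_vol = np.array([0.10, 0.05, -0.02, 0.08, 0.01])    # volumetric kernel per observation
      K_geo = np.array([-1.2, -0.9, -1.5, -1.1, -1.3])     # geometric kernel per observation
      rho_toa = np.array([0.32, 0.33, 0.30, 0.32, 0.31])   # observed TOA reflectance

      A = np.column_stack([np.ones_like(K_vol), K_vol, K_geo])
      coeffs, *_ = np.linalg.lstsq(A, rho_toa, rcond=None)
      f_iso, f_vol, f_geo = coeffs

      residual_rms = np.sqrt(np.mean((A @ coeffs - rho_toa) ** 2))
      print(f"f_iso={f_iso:.3f}, f_vol={f_vol:.3f}, f_geo={f_geo:.3f}, residual RMS={residual_rms:.4f}")
      ```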
    • Using TRIZ in teaching and learning

      Robertson-Begg, John; University of Derby (2016-07-04)
      This is a presentation covering the background to TRIZ and how it is incorporated into Teaching and Learning.
    • Validation of aerosol products from AATSR and MERIS/AATSR synergy algorithms—Part 1: Global Evaluation.

      Che, Yahui; Mei, Linlu; Xue, Yong; Guang, Jie; She, Lu; Li, Ying; University of Derby; Chinese Academy of Sciences; University of Chinese Academy of Sciences; University of Bremen; et al. (MDPI, 2018-09-06)
      The European Space Agency’s (ESA’s) Aerosol Climate Change Initiative (CCI) project intends to exploit the robust, long-term, global aerosol optical thickness (AOT) dataset from Europe’s satellite observations. Newly released Swansea University (SU) aerosol products include AATSR retrieval and synergy between AATSR and MERIS with a spatial resolution of 10 km. In this study, both AATSR retrieval (SU/AATSR) and AATSR/MERIS synergy retrieval (SU/synergy) products are validated globally using Aerosol Robotic Network (AERONET) observations for March, June, September, and December 2008, as suggested by the Aerosol-CCI project. The analysis includes the impacts of cloud screening, surface parameterization, and aerosol type selections for two products under different surface and atmospheric conditions. The comparison between SU/AATSR and SU/synergy shows very accurate and consistent global patterns. The global evaluation using AERONET shows that the SU/AATSR product exhibits slightly better agreement with AERONET than the SU/synergy product. SU/synergy retrieval overestimates AOT for all surface and aerosol conditions. SU/AATSR data is much more stable and has better quality; it slightly underestimates fine-mode dominated and absorbing AOTs yet slightly overestimates coarse-mode dominated and non-absorbing AOTs.
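      A minimal sketch of the kind of validation statistics typically reported when comparing satellite AOT retrievals against AERONET (synthetic values; not the authors' data or their exact metrics):

      ```python
      # Bias, root-mean-square error and Pearson correlation between retrieved
      # and AERONET AOT. Values below are synthetic placeholders.
      import numpy as np

      aeronet_aot   = np.array([0.12, 0.35, 0.08, 0.50, 0.22, 0.15])
      satellite_aot = np.array([0.15, 0.33, 0.10, 0.58, 0.20, 0.18])

      bias = np.mean(satellite_aot - aeronet_aot)
      rmse = np.sqrt(np.mean((satellite_aot - aeronet_aot) ** 2))
      r = np.corrcoef(satellite_aot, aeronet_aot)[0, 1]
      print(f"bias = {bias:+.3f}, RMSE = {rmse:.3f}, r = {r:.3f}")
      ```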
    • A validation of security determinants model for cloud adoption in Saudi organisations’ context

      Alassafi, Madini O.; Atlam, Hany F.; Alshdadi, Abdulrahman A.; Alzahrani, Abdullah I.; AlGhamdi, Rayed A.; Buhari, Seyed M.; University of Southampton (Springer, 2019-08-30)
      Governments across the world are starting to make a dynamic shift to cloud computing so as to increase efficiency. Although cloud technology brings various benefits for government organisations, including flexibility and low cost, adopting it alongside the existing system is not an easy task. In this regard, the most significant challenge to any government agency is the security concern. Our previous study focused on identifying the security factors that influence the decision of government organisations to adopt the cloud. This research enhances the previous work by investigating the impact of various independent security-related factors on the adopted security taxonomy, based on critical ratio, standard error and significance levels. Data were collected from IT and security experts in the government organisations of Saudi Arabia. The Analysis of Moment Structures (AMOS) tool was used in this research for data analysis. The critical ratios reveal the importance of the Security Benefits, Risks and Awareness taxonomies for cloud adoption. Also, most of the exogenous variables had strong and positive relationships with their fellow exogenous variables. In future, this taxonomy model can also be applied to study the adoption of new IT innovations whose IT architecture is similar to that of the cloud.
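      A minimal reproduction of the critical-ratio arithmetic reported by SEM tools such as AMOS, i.e., the estimate divided by its standard error, with a two-sided p-value from the standard normal distribution (the numbers below are hypothetical, not the study's estimates):

      ```python
      # Critical ratio (C.R.) = estimate / standard error, with two-sided p-value.
      from scipy.stats import norm

      estimate, std_err = 0.42, 0.11            # hypothetical path coefficient and its S.E.
      critical_ratio = estimate / std_err
      p_value = 2 * norm.sf(abs(critical_ratio))
      print(f"C.R. = {critical_ratio:.2f}, p = {p_value:.4f}")   # significant if p < 0.05
      ```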
    • Vehicular cloud networks: Architecture and security

      Ahmad, Farhan; Kazim, Muhammad; Adnane, Asma; University of Derby (Springer, 2015)
      Cloud computing has been widely adopted across the IT industry due to its scalable, cost-effective, and efficient services. It has many applications in areas such as healthcare, mobile cloud computing (MCC), and vehicular ad hoc networks (VANET). Vehicular cloud networks (VCN) are another application of cloud computing, combining cloud and VANET technologies. A VCN is composed of three clouds, namely the vehicular cloud, the infrastructure cloud, and the traditional IT cloud. In this chapter, the three clouds involved in VCN are presented using a three-tier architecture, and the security issues related to each tier are described in detail. After describing the detailed architecture of VANET, their components, and their important characteristics, this chapter presents the architecture of VCN. This is followed by a detailed analysis of the threats to which each tier-cloud of VCN is vulnerable.
    • Vehicular cloud networks: Architecture, applications and security issues

      Ahmad, Farhan; Kazim, Muhammad; Adnane, Asma; Abir, Awad; University of Derby, Derby, UK; Software Res. Inst., Athlone Inst. of Technol., Athlone, Ireland (IEEE, 2015-12-07)
      Vehicular Ad Hoc Networks (VANET) are the largest real-life application of ad-hoc networks, where nodes are represented by fast-moving vehicles. This paper introduces the emerging technology of Vehicular Cloud Networking (VCN), where vehicles and adjacent infrastructure merge with traditional internet clouds to offer applications ranging from small applications to very complex ones. VCN is composed of three types of clouds: the vehicular cloud, the infrastructure cloud and the traditional Back-End (IT) cloud. We introduce these clouds via a three-tier architecture along with their operations and characteristics. We propose use cases for each cloud tier that explain how it is practically created and utilised while taking vehicular mobility into consideration. Moreover, it is critical to ensure the security, privacy and trust of the VCN network and its assets. Therefore, to describe the security of VCN, we provide an in-depth analysis of the threats related to each tier of VCN. The threats related to the vehicular cloud and the infrastructure cloud are categorized according to their assets, i.e., vehicles, adjacent infrastructure, wireless communication, vehicular messages, and vehicular cloud threats. Similarly, the Back-End cloud threats are categorized into data and network threats. The possible implications of these threats and their effects on various components of VCN are also explained in detail.
    • Vehicular sensor networks: Applications, advances and challenges

      Kurugollu, Fatih; Ahmed, Syed Hassan; Hussain, Rasheed; Ahmad, Farhan; Kerrache, Chaker Abdelaziz; Cyber Security Research Group, College of Engineering and Technology, University of Derby, UK; JMA Wireless, Liverpool, NY 13088, USA; Institute of Information Systems, Innopolis University, 420500 Innopolis, Russia; Department of Mathematics and Computer Science, University of Ghardaia, Ghardaia 4700, Algeria (MDPI, 2020-07-01)
      Vehicular sensor networks (VSN) provide a new paradigm for transportation technology and demonstrate massive potential to improve the transportation environment due to the unlimited power supply of the vehicles and resulting minimum energy constraints. This special issue is focused on the recent developments within the vehicular networks and vehicular sensor networks domain. The papers included in this Special Issue (SI) provide useful insights to the implementation, modelling, and integration of novel technologies, including blockchain, named data networking, and 5G, to name a few, within vehicular networks and VSN.
    • Verifiable public key encryption scheme with equality test in 5G networks

      Xu, Yan; Wang, Ming; Zhong, Hong; Cui, Jie; Liu, Lu; Franqueira, Virginia N. L.; Anhui University; University of Derby (IEEE, 2017-06-19)
      The emergence of 5G networks will allow Cloud Computing providers to offer more convenient services. However, security and privacy issues of cloud services in 5G networks represent huge challenges. Recently, to improve security and privacy, a novel primitive was proposed by Ma et al. in TIFS 2015, called Public Key Encryption with Equality Test supporting Flexible Authorization (PKEET-FA). However, the PKEET scheme lacks verification for equality test results to check whether the cloud performed honestly. In this research, we expand the study of PKEET-FA and propose a verifiable PKEET scheme, called V-PKEET, which, to the best of our knowledge, is the first work that achieves verification in PKEET. Moreover, V-PKEET has been designed for three types of authorization to dynamically protect the privacy of data owners. Therefore, it further strengthens security and privacy in 5G networks.
    • Video authentication based on statistical local information

      Al-Athamneh, Mohammad; Crookes, Danny; Farid, Mohsen; University of Derby (IEEE, 2016-12-06)
      With the proliferation of video editing tools, the trustworthiness of video information has become a highly sensitive issue. Today many devices, such as CCTV systems, digital cameras and mobile phones, are capable of capturing digital video, and these videos may be transmitted over the Internet or any other non-secure channel. As digital video can be used as supporting evidence, it has to be protected against manipulation or tampering. Most video authentication techniques are based on watermarking and digital signatures; these techniques are effective for copyright purposes but difficult to apply in other cases, such as video surveillance or videos captured by consumer cameras. In this paper we propose an intelligent technique for video authentication which uses the video's local information, making it useful for real-world applications. The proposed algorithm relies on the video's statistical local information and was applied to a dataset of videos captured by a range of consumer video cameras. The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
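      The abstract does not specify the exact statistic used, so the sketch below shows one generic form of "statistical local information": per-block mean and variance signatures of a frame, compared between two versions to flag locally manipulated regions (illustrative only, not the authors' algorithm):

      ```python
      # Per-block mean/variance signature of a frame, compared between an original
      # and a suspect frame. Frames and the threshold are synthetic placeholders.
      import numpy as np

      def block_signature(frame: np.ndarray, block: int = 16) -> np.ndarray:
          h, w = frame.shape
          stats = []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  b = frame[y:y + block, x:x + block].astype(np.float64)
                  stats.append((b.mean(), b.var()))
          return np.array(stats)

      rng = np.random.default_rng(1)
      original = rng.integers(0, 256, (128, 128)).astype(np.uint8)
      tampered = original.copy()
      tampered[32:48, 32:48] = 255                     # simulate a local manipulation

      diff = np.abs(block_signature(original) - block_signature(tampered)).max(axis=1)
      print("suspicious blocks:", np.flatnonzero(diff > 5.0))   # threshold is illustrative
      ```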
    • Video stream analysis in clouds: An object detection and classification framework for high performance video analytics

      Anjum, Ashiq; Abdullah, Tariq; Tariq, M. Fahim; Baltaci, Yusuf; Antonopoulos, Nikolaos; University of Derby, UK (IEEE, 2016-01-13)
      Object detection and classification are the basic tasks in video analytics and the starting point for other complex applications. Traditional video analytics approaches are manual, time-consuming and subjective due to the involvement of the human factor. We present a cloud-based video analytics framework for scalable and robust analysis of video streams. The framework empowers an operator by automating the object detection and classification process from recorded video streams. An operator only specifies the analysis criteria and the duration of the video streams to analyse. The streams are then fetched from cloud storage, decoded and analysed on the cloud. The framework offloads compute-intensive parts of the analysis to GPU-powered servers in the cloud. Vehicle and face detection are presented as two case studies for evaluating the framework, with one month of data and a 15-node cloud. The framework reliably performed object detection and classification on the data, comprising 21,600 video streams and 175 GB in size, in 6.52 hours. The GPU-enabled deployment of the framework took 3 hours to perform the analysis on the same number of video streams, making it at least twice as fast as the cloud deployment without GPUs.
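      A quick check of the reported speedup, using only the figures quoted above:

      $$ \frac{6.52\ \text{h (CPU-only cloud)}}{3\ \text{h (GPU-enabled)}} \approx 2.17, $$

      consistent with the claim of being at least twice as fast.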
    • Virtual vignettes: the acquisition, analysis, and presentation of social network data

      Howden, Chris; Liu, Lu; Li, Zhiyuan; Li, Jianxin; Antonopoulos, Nikolaos; University of Derby (Springer, 2014-06-15)
      Online social networks (OSNs) are immensely prevalent and have now become a ubiquitous and important part of modern, developed society. However, online social networks pose significant problems to digital forensic investigators who have no experience of the online environment. Data will reside on multiple servers in multiple countries, across multiple jurisdictions. Capturing it before it is overwritten or deleted is a known problem, mirrored in other cloud-based services. In this article, a novel method has been developed for the extraction, analysis, visualization, and comparison of snapshotted user profile data from the online social network Twitter. The research follows a process of design, implementation, simulation, and experimentation. The source code of the tool that was developed to facilitate data extraction has been made available on the Internet.
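      A minimal sketch of the snapshot-comparison idea (not the authors' tool): two snapshots of the same profile, taken at different times, are diffed to surface fields that changed between acquisitions. The field names below are hypothetical:

      ```python
      # Diff two profile snapshots taken at different times; values are hypothetical.
      snapshot_t0 = {"screen_name": "example", "followers": 150, "bio": "researcher"}
      snapshot_t1 = {"screen_name": "example", "followers": 172, "bio": "researcher", "location": "Derby"}

      changed = {
          key: (snapshot_t0.get(key), snapshot_t1.get(key))
          for key in snapshot_t0.keys() | snapshot_t1.keys()
          if snapshot_t0.get(key) != snapshot_t1.get(key)
      }
      print(changed)   # {'followers': (150, 172), 'location': (None, 'Derby')}
      ```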
    • Well-quasi-order for permutation graphs omitting a path and a clique

      Korpelainen, Nicholas; Atminas, Aistis; Brignall, Robert; Vatter, Vincent; Lozin, Vadim; University of Derby (Australian National University, 2015-04-29)
      We consider well-quasi-order for classes of permutation graphs which omit both a path and a clique. Our principal result is that the class of permutation graphs omitting P5 and a clique of any size is well-quasi-ordered. This is proved by giving a structural decomposition of the corresponding permutations. We also exhibit three infinite antichains to show that the classes of permutation graphs omitting {P6,K6}, {P7,K5}, and {P8,K4} are not well-quasi-ordered.
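      For context, the standard definition used here (not specific to this paper): a quasi-order $(Q,\le)$ is a well-quasi-order if every infinite sequence of elements contains an increasing pair,

      $$ \forall\, (q_i)_{i\ge 1} \subseteq Q \;\; \exists\, i<j : q_i \le q_j , $$

      equivalently, $Q$ contains neither an infinite strictly decreasing sequence nor an infinite antichain. Under the induced subgraph order, which has no infinite strictly decreasing sequences, well-quasi-order is therefore equivalent to the absence of an infinite antichain, which is why exhibiting the three antichains above suffices for the negative results.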
    • WFS and HOA: Simulations and evaluations of planar higher order ambisonic, wave field synthesis and surround hybrid algorithms for lateral spatial reproduction in theatre.

      Vilkaitis, Alexander; University of Derby (Verband Deutscher Tonmeister, 2017-09)
      Wave Field Synthesis and Higher Order Ambisonics are both spatialisation techniques that could be applied to theatre sound design, but practicalities such as the number of loudspeakers and the space required limit their use. Practical setups could consist of a planar array across the stage (for performer localisation) and surround speakers around the auditorium in different configurations (for ambience). This research simulates the use of extrapolated and truncated arrays, with HOA and WFS algorithms, in order to create a panned, frontally dominant system with potentially increased intelligibility due to source separation and spatial unmasking. Hybrid methods where WFS and ambisonics are used simultaneously will be evaluated to create a system for theatre that is psychoacoustically sound, homogeneous and practicable.
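      A heavily simplified sketch of the geometric core of array-based rendering (not a full WFS driving function): per-loudspeaker delay and distance-based attenuation for a virtual point source behind a linear stage array. The geometry, spacing and 1/sqrt(r) taper below are assumptions for illustration:

      ```python
      # Relative delays and gains for a virtual source rendered over a line array.
      import numpy as np

      c = 343.0                                       # speed of sound, m/s
      speakers = np.column_stack([np.arange(-3, 3.5, 0.5), np.zeros(13)])   # 13 speakers, 0.5 m spacing
      virtual_source = np.array([0.5, -2.0])          # virtual source 2 m behind the array

      r = np.linalg.norm(speakers - virtual_source, axis=1)   # source-to-speaker distances
      delays_ms = (r - r.min()) / c * 1e3             # relative delays in milliseconds
      gains = np.sqrt(r.min() / r)                    # ~1/sqrt(r) amplitude taper, normalised

      for d, g in zip(delays_ms, gains):
          print(f"delay {d:5.2f} ms   gain {g:.2f}")
      ```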